2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse.2015.59

When and Why Your Code Starts to Smell Bad

Abstract: In past and recent years, the issues related to managing technical debt have received significant attention from researchers in both industry and academia. Several factors contribute to technical debt. One of these is code bad smells, i.e., symptoms of poor design and implementation choices. While the repercussions of smells on code quality have been empirically assessed, there is still only anecdotal evidence on when and why bad smells are introduced. To fill this gap, we con…

Cited by 200 publications (169 citation statements). References 46 publications (71 reference statements).

“…Their goal was to define a method able to discriminate between God Class instances that are introduced by design and God Class instances that are introduced unintentionally. Recently, Tufano et al. (2015) investigated when code smells are introduced by developers, and the circumstances and reasons behind their introduction. They showed that, most of the time, code artifacts are affected by smells from their creation, and that developers introduce them not only when implementing new features or enhancing existing ones, but sometimes also during refactoring.…”
Section: Diffuseness and Evolution of Code Smells (mentioning)
confidence: 99%
“…Such tools exploit different types of approaches, including metrics-based detection (Lanza and Marinescu 2010; Moha et al. 2010; Marinescu 2004; Munro 2005), graph-based techniques (Tsantalis and Chatzigeorgiou 2009), mining of code changes (Palomba et al. 2015a), textual analysis of source code (Palomba et al. 2016b), or search-based optimization techniques (Kessentini et al. 2010; Sahin et al. 2014). On the other hand, researchers have investigated how relevant code smells are to developers (Yamashita and Moonen 2013; Palomba et al. 2014), when and why they are introduced (Tufano et al. 2015), how they evolve over time (Arcoverde et al. 2011; Chatzigeorgiou and Manakos 2010; Lozano et al. 2007; Ratiu et al. 2004; Tufano et al. 2017), and whether they impact software quality properties such as program comprehensibility (Abbes et al. 2011), fault- and change-proneness (Khomh et al. 2012; Khomh et al. 2009a; D'Ambros et al. 2010), and code maintainability (Yamashita and Moonen 2012, 2013; Deligiannis et al. 2004; Li and Shatnawi 2007; Sjoberg et al. 2013).…”
Section: Introduction (mentioning)
confidence: 99%
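
To make the metrics-based detection approaches mentioned above concrete, here is a minimal sketch of a God Class check in the spirit of Lanza and Marinescu's detection strategies. The ClassMetrics record is introduced only for the example; the thresholds follow the commonly published strategy (WMC >= 47, TCC < 1/3, ATFD > 5), but the code is an illustration, not any of the cited detectors.

    // Sketch of a metrics-based God Class check (Lanza and Marinescu style).
    // ClassMetrics is a hypothetical container for three class-level metrics.
    record ClassMetrics(int wmc, double tcc, int atfd) {}

    public class GodClassDetector {
        static final int    WMC_VERY_HIGH = 47;        // Weighted Methods per Class
        static final double TCC_ONE_THIRD = 1.0 / 3.0; // Tight Class Cohesion
        static final int    ATFD_FEW      = 5;         // Access To Foreign Data

        // A class is flagged when three symptoms co-occur: it is complex,
        // non-cohesive, and accesses many attributes of other classes.
        static boolean isGodClass(ClassMetrics m) {
            return m.wmc() >= WMC_VERY_HIGH
                && m.tcc() < TCC_ONE_THIRD
                && m.atfd() > ATFD_FEW;
        }

        public static void main(String[] args) {
            ClassMetrics suspect = new ClassMetrics(52, 0.12, 9); // sample values
            System.out.println("God Class? " + isGodClass(suspect)); // prints true
        }
    }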
“…Considering that identifying components is a less complicated task when the identifiers are composed of significant terms [9,14], the presented analysis can support analyzability and modifiability by recognizing significant terms through their frequent use (i.e., terms that are representative). Furthermore, if the naming patterns are not found in a software implementation, this could indicate poor design and implementation choices with regard to the terms used, affecting the test case artifacts [17]; in this sense, the analysis can also support testability. Table 1 shows some data about the projects considered henceforth: LOC (lines of code), QF (quantity of files), QP (quantity of packages), and QT (quantity of terms).…”
Section: Results (mentioning)
confidence: 99%
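
As a rough illustration of the term analysis described above, the sketch below splits camelCase identifiers into terms and counts how often each term occurs; frequent terms are then taken as the significant, representative ones. The identifier list and the splitting rule are assumptions for the example, not the cited approach.

    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Sketch: derive candidate "significant terms" from identifiers by
    // splitting on camelCase boundaries and counting occurrences.
    public class TermFrequency {
        public static void main(String[] args) {
            List<String> identifiers = List.of(   // hypothetical input
                    "parseConfigFile", "writeConfigEntry", "parseHeader");

            Map<String, Integer> counts = new TreeMap<>();
            for (String id : identifiers) {
                // "parseConfigFile" splits into ["parse", "Config", "File"].
                for (String term : id.split("(?=[A-Z])")) {
                    counts.merge(term.toLowerCase(), 1, Integer::sum);
                }
            }
            // Prints: config: 2, entry: 1, file: 1, header: 1, parse: 2, write: 1
            counts.forEach((t, n) -> System.out.println(t + ": " + n));
        }
    }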
“…We carried out the training with the popular machine learning library Weka. It contains algorithms from different categories, for instance Bayesian methods, support vector machines, and decision trees.…”
Section: Discussion (mentioning)
confidence: 99%
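
For readers unfamiliar with Weka, the snippet below is a minimal sketch of what such a training run typically looks like: load an ARFF dataset, train a J48 decision tree, and estimate accuracy with 10-fold cross-validation. The file name smells.arff is a placeholder; DataSource, J48, and Evaluation are standard parts of Weka's Java API.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    // Minimal Weka sketch: train a decision tree and cross-validate it.
    public class WekaTrainingSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("smells.arff"); // placeholder file
            data.setClassIndex(data.numAttributes() - 1);    // last column = label

            J48 tree = new J48();       // C4.5-style decision tree learner
            tree.buildClassifier(data); // fit on the full dataset

            // Separate 10-fold cross-validation to estimate accuracy.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }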
“…Out of these databases, Terapromise is the most up to date and also includes a coding rule violation [18] database. Based on the capabilities of the tool we used for static source code analysis, we gathered C&K metrics, rule violations, and code clone related metrics, such as the number of clone instances located in the given source code elements.…”
Section: Related Work (mentioning)
confidence: 99%
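
As a small illustration of the per-class dataset such a pipeline produces, the sketch below combines C&K metrics (WMC, CBO, DIT) with rule-violation and clone counts in one record and applies a trivial filter. The record layout and the sample values are assumptions, not the actual output format of the tool the authors used.

    import java.util.List;

    // Sketch: one row per class, combining C&K metrics with rule-violation
    // and clone counts. Field names and values are illustrative only.
    record ClassRow(String name, int wmc, int cbo, int dit,
                    int ruleViolations, int cloneInstances) {}

    public class DatasetSketch {
        public static void main(String[] args) {
            List<ClassRow> rows = List.of(
                    new ClassRow("OrderService",  23,  9, 2,  4, 1),
                    new ClassRow("ReportBuilder", 41, 14, 3, 11, 5));

            // Trivial use of the data: report classes that both violate
            // coding rules and contain cloned code.
            rows.stream()
                .filter(r -> r.ruleViolations() > 0 && r.cloneInstances() > 0)
                .forEach(r -> System.out.println(r.name() + " needs review"));
        }
    }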