2012 9th IEEE Working Conference on Mining Software Repositories (MSR)
DOI: 10.1109/msr.2012.6224300
Think locally, act globally: Improving defect and effort prediction models

Cited by 95 publications (79 citation statements)
References 29 publications
“…No learner is best for all data sets [4] since data can change over time, making prior results outdated [39]. Hence, many researchers now explore "local learners" that eschew single global conclusions in favor of more context-dependent conclusions [1], [22], [36].…”
Section: Algorithm Tuning
confidence: 99%
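The "local learner" idea quoted above can be made concrete with a small sketch: instead of fitting one global model to all historical data, the data is first partitioned and a separate model is fit per partition, so predictions come from the most relevant context. This is a minimal illustration only, assuming scikit-learn and generic numpy arrays X and y; it is not the specific method of the cited works.

```python
# Minimal sketch of "local learning": cluster the data, then fit one model
# per cluster instead of a single global model. KMeans and linear regression
# are illustrative stand-ins, not the cited papers' exact learners.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_local_models(X, y, n_clusters=4, random_state=0):
    """Partition the data and train one regressor per partition."""
    km = KMeans(n_clusters=n_clusters, random_state=random_state).fit(X)
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        models[c] = LinearRegression().fit(X[mask], y[mask])
    return km, models

def predict_local(km, models, X_new):
    """Route each new instance to the model of its nearest cluster."""
    labels = km.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(labels, X_new)])
```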
“…We prefer the IDEA algorithm shown above (since it runs in linear time) followed by some case-based reasoning tool such as the W tool (see Figure 3). Other teams have generated clusters like Figure 2.d using recursive regression methods [1]. Regardless of how the landscape is generated, the general principle is the same:…”
Section: Landscape Mining
confidence: 99%
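As a rough illustration of the general principle described in this statement, the sketch below groups historical projects into regions and then estimates a new project by analogy with its nearest completed neighbours inside its region. Plain KMeans and a k-nearest-case median are generic stand-ins; this is not the IDEA algorithm or the W tool themselves, and all names are illustrative.

```python
# Illustrative "landscape" recipe: first group past projects into regions,
# then estimate a new project by analogy to its most similar completed
# neighbours inside that region (a stand-in for case-based reasoning).
import numpy as np
from sklearn.cluster import KMeans

def estimate_by_analogy(X_hist, y_hist, x_new, n_regions=5, k=3):
    km = KMeans(n_clusters=n_regions, random_state=0).fit(X_hist)
    region = km.predict(x_new.reshape(1, -1))[0]
    mask = km.labels_ == region
    X_r, y_r = X_hist[mask], y_hist[mask]
    # Case-based reasoning step: median target of the k most similar cases.
    dists = np.linalg.norm(X_r - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.median(y_r[nearest])
```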
“…[1,38,48]), as opposed to other studies which focus on direct predictions of project attributes while avoiding the human element (e.g. [34,9,31,4,28]). This complements recent works which have shown that complexity metrics, e.g.…”
Section: Introduction
confidence: 99%
“…More recent work has been emphasising the relatively good predictive performance achieved by ensembles of learning machines (Kultur et al 2009; Minku and Yao 2013a; Kocaguneli et al 2012) and local methods that make estimations based on completed projects similar to the project being estimated (Minku and Yao 2013a; Menzies et al 2013; Bettenburg et al 2012). For instance, Regression Trees (RTs), Bagging ensembles of MultiLayer Perceptrons (Bag + MLPs) and Bagging ensembles of RTs (Bag + RTs) have been shown to perform well across several datasets (Minku and Yao 2013a).…”
Section: Assuming No Chronology
confidence: 99%
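For readers unfamiliar with the "Bag + RTs" configuration mentioned in this statement, the following sketch shows a Bagging ensemble of regression trees evaluated with cross-validation in scikit-learn. The dataset here is a random placeholder, not one of the effort-estimation datasets used in the cited studies.

```python
# Minimal sketch of "Bag + RTs": a Bagging ensemble whose base learners are
# regression trees, evaluated with cross-validated mean absolute error.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((100, 8))       # placeholder project features
y = rng.random(100) * 1000     # placeholder effort values (e.g. person-hours)

bag_rt = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
scores = cross_val_score(bag_rt, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```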