Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018)
DOI: 10.1145/3236024.3236050
Applications of psychological science for actionable analytics

Abstract: Actionable analytics are those that humans can understand and operationalize. What kind of data mining models generate such actionable analytics? According to psychological scientists, humans understand models that most match their own internal models, which they characterize as lists of "heuristics" (i.e., lists of very succinct rules). One such heuristic rule generator is the Fast-and-Frugal Trees (FFTs) preferred by psychological scientists. Despite their successful use in many applied domains, FFTs have not…

Cited by 38 publications (45 citation statements)
References 62 publications
“…The best tree (as discovered on the training data) is then applied to the test data. Despite the apparent simplicity of FFTs, Chen et al. [18] reported that this method performs dramatically better than many prior results seen at recent ICSE conferences [3], [36] (including those that used hyperparameter optimization and data pre-processing with SMOTE).…”
Section: Fast and Frugal Trees (FFTs) (mentioning)
confidence: 95%
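For readers skimming this report, the following minimal Python sketch shows the structure the quoted passage refers to: a fast-and-frugal tree is a short cascade in which each level tests one cue, one branch exits immediately with a class, and the final level exits both ways. The cues, thresholds, and exit directions below are hypothetical placeholders, not the trees learned by Chen et al. [18].

# A single fast-and-frugal tree (FFT): each level tests one cue; one branch
# exits with a class, the other falls through; the last level exits both ways.
# All cue names and cut-points here are invented, purely for illustration.
def fft_classify(module):
    if module["loc"] > 100:                      # level 1: exit "buggy"
        return "buggy"
    if module["num_revisions"] <= 3:             # level 2: exit "clean"
        return "clean"
    if module["cyclomatic_complexity"] > 10:     # level 3: exit "buggy"
        return "buggy"
    # level 4: both branches exit
    return "buggy" if module["num_authors"] > 5 else "clean"

print(fft_classify({"loc": 80, "num_revisions": 7,
                    "cyclomatic_complexity": 4, "num_authors": 2}))  # -> clean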
“…They say a "good" defect predictor selects the 20% of files containing 80% of the defects. In the literature, this 20/80 rule is often called Popt(20) (the percent of the bugs found after reading 20% of the code). Popt(20) is widely used in the literature and, for details on that measure, we refer the reader to those publications [18], [42], [48], [62], [64], [69], [111]. For this paper, all we need to say about Popt(20) is that the conclusions reached from this metric are nearly the same as the conclusions reached via G-score.…”
Section: Evaluation Criteria (mentioning)
confidence: 98%
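The 20/80 reading of Popt(20) quoted above can be illustrated with a short, hedged sketch: rank files by the predictor's score, "read" files until 20% of the total lines of code are covered, and report the fraction of all known bugs inside that set. The data and the function name are made up; the Popt definitions in the cited papers are normalised, area-based refinements of this idea.

# Percent of bugs found after inspecting the top-ranked 20% of the code.
def bugs_found_at_20pct(files):
    """files: list of (predicted_score, loc, bugs); highest scores are read first."""
    total_loc = sum(loc for _, loc, _ in files)
    total_bugs = sum(bugs for _, _, bugs in files)
    budget = 0.2 * total_loc
    read_loc, found_bugs = 0, 0
    for _, loc, bugs in sorted(files, key=lambda f: f[0], reverse=True):
        if read_loc + loc > budget:              # stop once 20% of the LOC is spent
            break
        read_loc += loc
        found_bugs += bugs
    return found_bugs / total_bugs

files = [(0.9, 200, 5), (0.8, 100, 3), (0.3, 700, 1), (0.1, 1000, 1)]
print(bugs_found_at_20pct(files))                # 0.8: 80% of the bugs in 20% of the code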
“…• Software defect prediction (classifying modules into "buggy" or otherwise [3], [15], [22], [24], [36], [56], [57]); • Software bug report text mining (to find severity [3], [44]). Table 1 is a partial list of some of the tunings that might be explored.…”
Section: Introduction (mentioning)
confidence: 99%
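The Table 1 mentioned above belongs to the citing paper and is not reproduced in this report. As a hedged stand-in, the sketch below shows why exploring "the tunings" is expensive: even a modest grid over a few learner knobs multiplies into hundreds of candidate configurations. The parameter names and ranges are generic examples for a tree learner, not the citing paper's actual Table 1.

# How a small grid of tuning options multiplies into many configurations.
from itertools import product

grid = {
    "max_depth":         [1, 3, 6, 12, None],
    "min_samples_split": [2, 5, 10, 20],
    "min_samples_leaf":  [1, 2, 5, 10],
    "max_features":      [0.25, 0.5, 0.75, 1.0],
    "criterion":         ["gini", "entropy"],
}
configs = list(product(*grid.values()))
print(len(configs))                              # 5 * 4 * 4 * 4 * 2 = 640 candidate tunings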
“…Recently it was discovered how to (a) save most of that CPU cost while at the same time (b) find better tunings. As discussed later, a method called "FFtrees" [51] (which just selects the best model within a small forest of shallow decision trees) generates much better predictions than supposed state-of-the-art results obtained after CPU-intensive tuning [15]. This is strange, since standard tuning tries thousands of options but FFtrees tries just a dozen.…”
Section: Introduction (mentioning)
confidence: 99%
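To make the contrast concrete, here is a minimal sketch of the "small forest" step, assuming the common FFT scheme in which every exit-direction policy of a depth-4 tree is enumerated (2^4 = 16 shallow trees), each candidate is scored on the training data, and the best one is kept. The scoring function below is a placeholder for a real criterion such as G-score; the small option count is the point.

# FFtrees explores only the 16 exit-direction variants of a depth-4 tree,
# versus the thousands of configurations tried by grid-style tuning.
from itertools import product

DEPTH = 4
exit_policies = list(product([0, 1], repeat=DEPTH))   # which class each level exits toward
print(len(exit_policies))                             # 16 candidate shallow trees

def score_on_training_data(policy):
    # placeholder: a real implementation would build the FFT for this exit
    # policy and compute, e.g., its G-score on the training data
    return sum(policy)

best = max(exit_policies, key=score_on_training_data)
print("best exit policy:", best)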