Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2012
DOI: 10.1145/2339530.2339556
Intelligible models for classification and regression

Abstract: Complex models for regression and classification have high accuracy, but are unfortunately no longer interpretable by users. We study the performance of generalized additive models (GAMs), which combine single-feature models called shape functions through a linear function. Since the shape functions can be arbitrarily complex, GAMs are more accurate than simple linear models. But since they do not contain any interactions between features, they can be easily interpreted by users. We present the first large-scal…
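The GAM structure the abstract describes — a prediction that is a sum of single-feature shape functions, F(x) = f1(x1) + … + fn(xn) — can be sketched in a few lines. The toy data, the one-pass backfitting loop, and the 8-leaf trees below are illustrative assumptions, not the paper's actual fitting procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: y is an additive function of two features plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

# One backfitting pass: fit each shape function f_j on the residual
# left by the intercept and the previously fitted shape functions.
intercept = y.mean()
residual = y - intercept
shapes = []
for j in range(X.shape[1]):
    # Each f_j sees only feature j, so it can be plotted on its own --
    # that single-feature structure is what makes the model intelligible.
    f_j = DecisionTreeRegressor(max_leaf_nodes=8).fit(X[:, [j]], residual)
    residual = residual - f_j.predict(X[:, [j]])
    shapes.append(f_j)

# The GAM prediction is just the intercept plus the shape-function sum.
pred = intercept + sum(f.predict(X[:, [j]]) for j, f in enumerate(shapes))
```

Each fitted `f_j` can be inspected or plotted in isolation, which is the interpretability argument the abstract makes: accuracy comes from flexible per-feature shapes, intelligibility from the absence of interactions.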

Cited by 370 publications (298 citation statements). References 16 publications.
“…In this context, a system is considered interpretable if "a user can not only see, but also study and understand how inputs are mathematically mapped to outputs" (Doran et al 2017), with regression models (Schielzeth 2010) or generalized additive models (Lou et al 2012) serving as examples. However, as discussed for instance in Vellido et al (2012) and Rudin (2014), interpretability in these cases refers almost exclusively to a mathematical property of the models, allowing for a certain degree of knowledge extraction from the model and subsequent interpretation by domain experts.…”
Section: Comprehensibility and Explanation in Machine Learning
Confidence: 99%
“…Let $\mathcal{H}^1 = \bigoplus_{u \in U^1} \mathcal{H}_u$ denote the Hilbert space of functions that have the additive form $F(x) = \sum_{u \in U^1} f_u(x_u)$ on univariate components; we call those components shape functions [19]. Similarly, let $\mathcal{H} = \bigoplus_{u \in U} \mathcal{H}_u$ denote the Hilbert space of functions of $x = (x_1, \ldots, x_n)$ that have additive form…”
Section: Problem Definition
Confidence: 99%
“…Standard additive modeling only involves modeling individual features (also called feature shaping). Previous research showed that gradient boosting with ensembles of shallow regression trees is the most accurate method among a number of alternatives [19].…”
Section: Fitting Generalized Additive Models
Confidence: 99%