2020
DOI: 10.48550/arxiv.2003.07132
Preprint

GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions

Abstract: The lack of interpretability is an inevitable problem when using neural network models in real applications. In this paper, a new explainable neural network called GAMI-Net, based on generalized additive models with structured interactions, is proposed to pursue a good balance between prediction accuracy and model interpretability. The GAMI-Net is a disentangled feedforward network with multiple additive subnetworks, where each subnetwork is designed for capturing either one main effect or one pairwise interaction.
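As a rough illustration of the architecture the abstract describes (not the authors' released code; the hidden sizes, feature count, and interaction pairs below are invented for the sketch), a GAMI-Net-style network can be assembled from one small subnetwork per main effect plus one per selected pairwise interaction:

```python
# Minimal sketch of a GAMI-Net-style additive network (PyTorch).
# Hypothetical illustration: hidden sizes, feature count, and the chosen
# interaction pairs are assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn


class Subnet(nn.Module):
    """One additive subnetwork: models a single main effect (in_dim=1)
    or a single pairwise interaction (in_dim=2)."""

    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.f(x)


class GAMINetSketch(nn.Module):
    def __init__(self, n_features, pairs):
        super().__init__()
        self.pairs = pairs
        self.mains = nn.ModuleList(Subnet(1) for _ in range(n_features))
        self.inters = nn.ModuleList(Subnet(2) for _ in pairs)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Disentangled main effects: each subnetwork sees one feature only.
        out = self.bias + sum(f(x[:, [j]]) for j, f in enumerate(self.mains))
        # Structured interactions: each subnetwork sees exactly one pair.
        out = out + sum(f(x[:, [j, k]])
                        for (j, k), f in zip(self.pairs, self.inters))
        return out


model = GAMINetSketch(n_features=4, pairs=[(0, 1), (2, 3)])
y_hat = model(torch.randn(8, 4))  # shape (8, 1)
```

Because the prediction is a plain sum, each fitted subnetwork can be plotted on its own as a shape function, which is the source of the interpretability the abstract refers to.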

Cited by 8 publications (11 citation statements) | References 33 publications
“…NAM can be viewed as a neural network implementation of GAM where shape functions are selected from a class of functions which can be realized by neural networks with a certain architecture. Similar approaches using neural networks for constructing GAMs and performing shape functions called GAMI-Net and the Adaptive Explainable Neural Networks (AxNNs) were proposed by Yang et al [15] and Chen et al [16], respectively. In order to avoid the neural network overfitting, an ensemble of gradient boosting machines producing shape functions was proposed by Konstantinov and Utkin [76].…”
Section: Related Work
confidence: 99%
“…The impact of every feature on the prediction is determined by its corresponding shape function. Similar methods using neural networks for constructing GAMs and performing shape functions called GAMI-Net and the Adaptive Explainable Neural Networks (AxNNs) were proposed by Yang et al [15] and Chen et al [16], respectively. LIME as well as other methods have been successfully applied to many machine learning models for explanation.…”
Section: Introduction
confidence: 99%
“…A linear combination of neural networks implementing shape functions in GAM is a basis for NAMs [15] which sufficiently extend the available explanation methods and, in fact, open a door for constructing a new class of methods. Similar approaches using neural networks to construct GAMs and to perform shape functions are the basis of methods called GAMI-Net [16] and AxNNs [17]. An architecture called the regression network which can be also regarded as a modification of NAM is proposed by O'Neill et al [61].…”
Section: Related Work
confidence: 99%
“…the GAM outcome is a linear combination of some functions of features. Several explanation models based on GAMs have been proposed, including the well-known Explainable Boosting Machine (EBM) [14], the Neural Additive Model (NAM) [15], GAMI-Net [16], the Adaptive Explainable Neural Networks [17]. The main peculiarity of the aforementioned surrogate models is that the influence or shape functions from GAM are obtained by training neural networks (in NAM or GAMI-Net) or by training the functions iteratively (EBM).…”
Section: Introduction
confidence: 99%
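For concreteness, the GAM structure these statements refer to can be written out as follows. This is the standard textbook form, not a formula quoted from the paper:

```latex
% Standard GAM form: the prediction is a sum (a "linear combination")
% of per-feature shape functions f_j, plus an intercept \beta_0.
% NAM and GAMI-Net parameterize each f_j with a small subnetwork,
% while EBM fits the shape functions iteratively with boosted trees.
g\bigl(\mathbb{E}[\,y \mid x\,]\bigr) \;=\; \beta_0 \;+\; \sum_{j=1}^{p} f_j(x_j)
```

GAMI-Net additionally includes selected pairwise terms $f_{jk}(x_j, x_k)$ in the sum, which is the "structured interactions" part of its name.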