ESANN 2021 Proceedings (2021)
DOI: 10.14428/esann/2021.es2021-88

The LVQ-based Counter Propagation Network -- an Interpretable Information Bottleneck Approach

Abstract: In this paper we present a realization of the information bottleneck paradigm by means of an improved counter propagation network. It combines an unsupervised vector quantizer for data compression with a subsequent supervised learning vector quantization model. The approach is mathematically justified and yields an interpretable model for classification under the constraint of data compression, which is no longer learned independently of the classification task.* M.K., M.M.B. and D.S. were supported by grant…
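
To make the two-stage structure described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' model and not code from the paper): a plain k-means quantizer stands in for the unsupervised compression stage, each sample is then represented by its distances to the learned code vectors, and a basic LVQ1 update is trained on that compressed representation. All function names, the choice of distances-to-codes as the compressed representation, and all parameter values are illustrative assumptions; note in particular that this sketch keeps the two stages decoupled, whereas the paper's point is that compression is no longer learned independently of the classification task.

```python
# Hypothetical two-stage counter-propagation-style pipeline (illustrative only):
# unsupervised vector quantization for compression, then LVQ1 for classification.
import numpy as np

rng = np.random.default_rng(0)

def vector_quantizer(X, n_codes=10, n_iter=50):
    """Plain k-means as a stand-in for the unsupervised compression stage."""
    codebook = X[rng.choice(len(X), n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # assign every sample to its nearest code vector, then recompute means
        assign = np.argmin(((X[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
        for k in range(n_codes):
            if np.any(assign == k):
                codebook[k] = X[assign == k].mean(axis=0)
    return codebook

def lvq1_train(H, y, n_protos=1, lr=0.05, epochs=30):
    """Basic LVQ1 trained on the compressed representation H."""
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], n_protos, replace=False)
        protos.append(H[idx])
        labels.extend([c] * n_protos)
    protos, labels = np.vstack(protos).astype(float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(H)):
            j = np.argmin(((protos - H[i]) ** 2).sum(-1))
            step = lr if labels[j] == y[i] else -lr  # attract or repel the winner
            protos[j] += step * (H[i] - protos[j])
    return protos, labels

# toy two-class data; all values are illustrative only
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(2.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

codebook = vector_quantizer(X)
# assumed compressed representation: distances to the learned code vectors
H = np.linalg.norm(X[:, None] - codebook[None], axis=-1)
protos, labels = lvq1_train(H, y)
pred = labels[np.argmin(((H[:, None] - protos[None]) ** 2).sum(-1), axis=1)]
print("training accuracy:", (pred == y).mean())
```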

Cited by 2 publications (3 citation statements); references 19 publications.

“…Contributions of the special session on "Interpretable Models in Machine Learning and Explainable Artificial Intelligence" cover a broad range of the previously mentioned aspects: interpretability of prototype-based methods for classification and efficient data representation [49,20,29], interpretability of Support Vector Machines (SVMs) [54], interpretability of random forests [38], explainability of black-box models [12,26,35], and informativeness of linguistic properties in word representations [5].…”
Section: Contributions from ESANN 2021 (mentioning)
confidence: 99%
“…With respect to prototype-based models, the approach described by Kaden et al. [29] realizes information bottleneck learning by combining counterpropagation and LVQ, whereas Graeber et al. [20] use context information and prototype adaptation during inference for better LVQ performance and interpretability. Taylor and Merényi [49] propose an improvement to t-SNE which allows automated specification of its perplexity parameter using topological information about a data manifold revealed through prototype-based learning.…”
Section: Contributions from ESANN 2021 (mentioning)
confidence: 99%
“…Here, we present an approach inspired by [6,7] that maps graph data to a proximity space by a sensoric response principle (SRP) [8,9] based on different graph comparison strategies, allowing their relevance to be learned by Generalized Matrix LVQ (GMLVQ), a prominent interpretable classifier model [10,11]. This SRP significantly reduces the number of kernel computations and hence makes the method practicable for very large data sets as well.…”
Section: Introduction (mentioning)
confidence: 99%
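
For readers unfamiliar with the GMLVQ model mentioned in the statement above, the following short sketch illustrates the standard GMLVQ adaptive distance and why it is regarded as interpretable: the metric is parameterized by a learned matrix Omega, and the diagonal of Lambda = Omega^T Omega can be read as feature relevances. This is the textbook GMLVQ definition, not code from the cited paper; Omega is filled with toy random values here rather than being learned from data.

```python
# Standard GMLVQ adaptive distance (illustrative, with a toy Omega).
import numpy as np

def gmlvq_distance(x, w, omega):
    """Adaptive squared distance d_Lambda(x, w) = (x - w)^T Omega^T Omega (x - w)."""
    diff = omega @ (x - w)
    return float(diff @ diff)

rng = np.random.default_rng(1)
x, w = rng.normal(size=4), rng.normal(size=4)
omega = rng.normal(size=(4, 4))   # in GMLVQ this matrix is learned, not random
lam = omega.T @ omega             # Lambda = Omega^T Omega
print("adaptive distance:", gmlvq_distance(x, w, omega))
print("feature relevances (diag of Lambda):", np.diag(lam) / np.trace(lam))
```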