The HSIC Bottleneck: Deep Learning without Back-Propagation
Preprint, 2019
DOI: 10.48550/arxiv.1908.01580

Cited by 4 publications, 2020–2024 (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting). References 0 publications.
“…8). Like Ma, Lewis, and Kleijn (2019), we use a linear readout layer trained for 1000 epochs with gradient descent to map between the HSIC-learned output and the label encoding. This is only required for our experiments.…”
Section: Methods (mentioning)
confidence: 99%
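
The readout described in this statement is a simple linear map fit on top of the frozen HSIC-trained representation. A minimal NumPy sketch, assuming a one-hot label encoding Y, full-batch gradient descent on mean-squared error, and an illustrative learning rate (the cited works may differ in loss and optimizer details):

    import numpy as np

    def train_readout(Z, Y, epochs=1000, lr=0.01):
        # Z: (N, d) frozen outputs of the HSIC-trained network.
        # Y: (N, c) one-hot label encoding (an assumption of this sketch).
        N, d = Z.shape
        c = Y.shape[1]
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(d, c))
        b = np.zeros(c)
        for _ in range(epochs):
            pred = Z @ W + b
            err = pred - Y                      # MSE gradient: (2/N) Z^T (pred - Y)
            W -= lr * (2.0 / N) * Z.T @ err
            b -= lr * (2.0 / N) * err.sum(axis=0)
        return W, b

Classification would then read off np.argmax(Z @ W + b, axis=1); only W and b are learned, so no gradients ever propagate back into the HSIC-trained network.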
“…Though computing the mutual information of two random variables requires knowledge of their distributions, Ma, Lewis, and Kleijn (2019) propose using the Hilbert-Schmidt Independence Criterion (HSIC) as a proxy for mutual information. Given a finite number of samples, N, a statistical estimator for the HSIC (Gretton et al., 2005) is…”
Section: The Information Bottleneck (mentioning)
confidence: 99%
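
The estimator itself is cut off in the snippet, but the standard biased estimator from Gretton et al. (2005) is HSIC ≈ (N−1)^(−2) tr(K H L H), where K and L are kernel matrices over the two samples and H = I − (1/N) 11^T is the centering matrix. A minimal NumPy sketch, assuming Gaussian (RBF) kernels with a shared, illustrative bandwidth sigma:

    import numpy as np

    def gaussian_kernel(X, sigma=1.0):
        # Pairwise squared Euclidean distances, clamped at 0 for float safety.
        sq = np.sum(X**2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
        return np.exp(-d2 / (2.0 * sigma**2))

    def hsic(X, Y, sigma=1.0):
        # Biased empirical HSIC estimator: (N - 1)^{-2} tr(K H L H).
        N = X.shape[0]
        K = gaussian_kernel(X, sigma)
        L = gaussian_kernel(Y, sigma)
        H = np.eye(N) - np.ones((N, N)) / N     # centering matrix
        return np.trace(K @ H @ L @ H) / (N - 1) ** 2

Independent X and Y drive the estimate toward zero, while dependence inflates it, which is what lets HSIC stand in for mutual information without density estimation.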