2021
DOI: 10.3233/faia210211
Feature Back-Tracking with Sparse Deep Belief Networks

Abstract: To address the interpretability of deep learning, this paper proposes a feature back-tracking (FBT) approach based on a sparse deep learning architecture. First, for a deep belief network (DBN), both a Kullback-Leibler divergence penalty on the hidden neurons and an L1-norm penalty on the connection weights are introduced. In this way, the sparse response mechanism as well as the sparse connectivity of brain neurons can be simulated directly. This means the DBN can learn a sparse framework and an effecti…
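The abstract combines two sparsity terms: a KL-divergence penalty pulling each hidden unit's mean activation toward a small target (sparse response) and an L1 penalty on the connection weights (sparse connectivity). The paper's exact formulation is not shown here, so the following is a minimal sketch of such a combined penalty; the function name, the target activation `rho`, and the weighting factors `beta` and `lam` are assumed for illustration.

```python
import numpy as np

def sparsity_penalty(hidden_probs, weights, rho=0.05, beta=1.0, lam=1e-3):
    """Hypothetical combined sparsity penalty for one DBN/RBM layer.

    hidden_probs: (batch, n_hidden) hidden-unit activation probabilities
    weights:      connection weight matrix of the layer
    rho:          target mean activation (assumed value)
    beta, lam:    penalty weights for the KL and L1 terms (assumed names)
    """
    rho_hat = hidden_probs.mean(axis=0)            # mean activation per hidden unit
    rho_hat = np.clip(rho_hat, 1e-8, 1.0 - 1e-8)   # guard against log(0)
    # KL(rho || rho_hat), summed over hidden units: encourages sparse responses
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))
    # L1 norm of the weights: encourages sparse connectivity
    l1 = np.sum(np.abs(weights))
    return beta * kl + lam * l1
```

When the empirical mean activation equals `rho`, the KL term vanishes and only the weighted L1 term remains; in training, such a penalty would be added to the reconstruction or likelihood objective.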

Cited by 0 publications
References 19 publications