2021
DOI: 10.1109/tnnls.2020.3005348

Deep Residual Autoencoders for Expectation Maximization-Inspired Dictionary Learning

Abstract: We introduce a neural-network architecture, termed the constrained recurrent sparse auto-encoder (CRsAE), that solves convolutional dictionary learning problems, thus establishing a link between dictionary learning and neural networks. Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate Expectation-Maximization algorithm to develop autoencoders that enable the simultaneous training of the dictionary and regularization parameter (ReLU b…
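To illustrate the alternating-minimization view the abstract invokes, here is a minimal non-convolutional sketch under assumed names: an approximate E-step (sparse coding via ISTA with the current dictionary) followed by an M-step (a gradient update of the dictionary). CRsAE itself unrolls such steps into an autoencoder and also learns the regularization parameter; nothing below is the paper's exact implementation.

```python
# Sketch of alternating minimization for dictionary learning, read as
# approximate EM. All names (ista, dictionary_step, lam, L) are illustrative.
import numpy as np

def ista(z, D, lam, L, n_iter=50):
    """Approximate E-step: sparse code x minimizing ||z - D x||^2 / 2 + lam ||x||_1."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        v = x - D.T @ (D @ x - z) / L                           # gradient step on data term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # soft-threshold (prox of l1)
    return x

def dictionary_step(Z, X, D, step=0.1):
    """M-step: gradient update of D on the reconstruction error, columns renormalized
    to avoid scaling ambiguity."""
    D = D - step * (D @ X - Z) @ X.T / Z.shape[1]
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

# Toy alternating loop over a batch Z whose columns are signals.
rng = np.random.default_rng(0)
Z = rng.standard_normal((32, 200))
D = rng.standard_normal((32, 64)); D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    L = np.linalg.norm(D, 2) ** 2                               # Lipschitz constant of the gradient
    X = np.stack([ista(Z[:, i], D, 0.1, L) for i in range(Z.shape[1])], axis=1)
    D = dictionary_step(Z, X, D)
```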

Cited by 27 publications (16 citation statements). References 39 publications.
“…The SOM, commonly known as the Kohonen map, is a pattern exploration and visualization model for high-dimensional datasets. Teuvo Kalevi Kohonen was the first to create this pattern in 1982 [42]. SOM is a clustering methodology that provides conventional statistical methods to identify groups in a dataset.…”
Section: Basic Concepts and Corresponding Terminologies (mentioning)
confidence: 99%
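To make the method the excerpt describes concrete, here is a minimal sketch of one SOM training step under standard assumptions (a 2-D unit grid and a Gaussian neighborhood); all names are illustrative and not from the cited work.

```python
# One Self-Organizing Map (SOM) update: find the best-matching unit (BMU)
# for a sample, then pull every unit toward the sample, weighted by its
# grid distance to the BMU.
import numpy as np

def som_step(weights, grid, x, lr=0.5, sigma=1.0):
    # weights: (n_units, dim) codebook vectors; grid: (n_units, 2) unit coordinates
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # winner unit
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)           # squared grid distances
    h = np.exp(-d2 / (2 * sigma ** 2))                     # Gaussian neighborhood
    return weights + lr * h[:, None] * (x - weights)       # move units toward x

# Toy usage: a 5x5 map clustering 3-D data.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
weights = rng.random((25, 3))
for x in rng.random((200, 3)):
    weights = som_step(weights, grid, x)
```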
“…where λ is a sparsity-enforcing parameter, and the norm constraints are to avoid scaling ambiguity. Following a similar approach to [17,23,24], we construct an autoencoder where its encoder maps z n into a sparse filter x n by unfolding T iterations of a variant of accelerated proximal gradient algorithm, FISTA [22], for sparse recovery. Specifically, each unfolding layer performs the following iteration…”
Section: Network Architecture (mentioning)
confidence: 99%
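The iteration itself is cut off in the excerpt above. For concreteness, here is a minimal sketch of the standard FISTA step that such an unrolled layer implements (Beck and Teboulle's accelerated proximal gradient); the names D, lam, and L are assumptions, and the cited paper's exact variant may differ.

```python
# One unrolled FISTA layer for sparse coding: gradient step on ||z - D x||^2,
# soft-threshold, then Nesterov-style momentum.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_layer(x_prev, y, s, z, D, lam, L):
    """Returns (x_t, y_next, s_next) for one unfolding layer."""
    grad = D.T @ (D @ y - z)                        # gradient of the data term
    x_t = soft_threshold(y - grad / L, lam / L)     # proximal gradient step
    s_next = (1 + np.sqrt(1 + 4 * s**2)) / 2        # momentum schedule
    y_next = x_t + ((s - 1) / s_next) * (x_t - x_prev)
    return x_t, y_next, s_next

# Usage across T unrolled layers, starting from x = y = 0 and s = 1:
#   for _ in range(T): x_new, y, s = fista_layer(x, y, s, z, D, lam, L); x = x_new
```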
“…Recent works proposed model-based neural networks to address computational efficiency [17,18], but still require full measurements for recovery. To enable compression, Chang et al [19] proposed an autoencoder, called RandNet, for dictionary learning.…”
Section: Introduction (mentioning)
confidence: 99%
“…Since they focus on unrolling the multiplicative update (MU) algorithm (see also the earlier work of (Hershey et al, 2014) in this context), their network structure is quite different from both (Xiong et al, 2021) and the algorithm we investigate in this work. Apart from the above articles, we would like to highlight that several works about unrolling methods for related-to-BSS problems, such as dictionary learning (Tolooshams et al, 2020) or even Magnetic Resonance Imaging (Arvinte et al, 2021), exist. However, as such problems have different purposes and challenges, it is beyond the scope of this article to review them.…”
Section: Related Work (mentioning)
confidence: 99%
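For reference, here is a minimal sketch of the multiplicative update (MU) rule the excerpt mentions, in its standard NMF form (Lee and Seung); this is the kind of iteration the cited works unroll into network layers. The matrix shapes and the stabilizing epsilon are assumptions.

```python
# One MU iteration for X ~= W @ H under the Frobenius-norm objective.
# Updates stay nonnegative because they only rescale current entries.
import numpy as np

def mu_step(X, W, H, eps=1e-12):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # update dictionary / mixing matrix
    return W, H

# Toy usage: factorize a random nonnegative matrix.
rng = np.random.default_rng(0)
X = rng.random((64, 100))
W, H = rng.random((64, 8)), rng.random((8, 100))
for _ in range(50):
    W, H = mu_step(X, W, H)
```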