2016
DOI: 10.1109/tit.2016.2517006

On the Minimax Risk of Dictionary Learning

Abstract: We consider the problem of learning a dictionary matrix from a number of observed signals, which are assumed to be generated via a linear model with a common underlying dictionary. In particular, we derive lower bounds on the minimum achievable worst-case mean squared error (MSE), regardless of the computational complexity of the dictionary learning (DL) scheme. By casting DL as a classical (or frequentist) estimation problem, the lower bounds on the worst-case MSE are derived by following an established informat…
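In the frequentist formulation the abstract describes, the observation model and the minimax (worst-case) MSE can be sketched as follows; the symbols ($\mathbf{y}_k$, $\mathbf{D}$, $\mathbf{x}_k$, the constraint class $\mathcal{X}$) are generic notation assumed here, not necessarily the paper's own:

$$\mathbf{y}_k = \mathbf{D}\,\mathbf{x}_k + \mathbf{n}_k, \quad k = 1,\dots,N, \qquad \varepsilon^{*} = \inf_{\widehat{\mathbf{D}}(\cdot)} \; \sup_{\mathbf{D}\in\mathcal{X}} \; \mathbb{E}\Bigl[\bigl\lVert \widehat{\mathbf{D}}(\mathbf{y}_1,\dots,\mathbf{y}_N) - \mathbf{D} \bigr\rVert_{\mathrm{F}}^{2}\Bigr].$$

Any lower bound on $\varepsilon^{*}$ holds for every estimator $\widehat{\mathbf{D}}$, which is why such bounds apply regardless of the computational complexity of the DL scheme.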

Cited by 34 publications (37 citation statements)
References 47 publications
“…Similarly, recent studies [15] on the minimax risk for the dictionary identifiability problem showed that the necessary number of samples for reliable reconstruction, up to a given mean squared error, of a KS dictionary within its local neighborhood scales with $\bigl(m \sum_{k=1}^{K} n_k m_k\bigr)$, compared to $\bigl(m \prod_{k=1}^{K} n_k m_k\bigr)$ for unstructured dictionaries of the same size [9].…”
Section: Motivations
confidence: 87%
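To make the gap between these two scalings concrete, here is a small numeric sketch; the factor dimensions $n_k \times m_k$ and the sum/product forms follow the reconstruction above, and all dimension values are illustrative rather than taken from either paper.

```python
import math

# Hypothetical Kronecker-structured (KS) dictionary with K = 2 factors,
# factor k of size n_k x m_k; the full dictionary is then
# (n_1 * n_2) x (m_1 * m_2). All dimensions below are illustrative.
n = [16, 32]  # factor row dimensions n_k
m = [8, 8]    # factor column dimensions m_k

m_total = math.prod(m)  # number of columns of the full dictionary

# KS sample-complexity scaling: m * sum_k n_k m_k
ks = m_total * sum(nk * mk for nk, mk in zip(n, m))

# Unstructured scaling for a dictionary of the same overall size:
# m * prod_k n_k m_k, which equals n * m^2 (rows times columns squared)
unstructured = m_total * math.prod(nk * mk for nk, mk in zip(n, m))

print(f"KS scaling:           {ks:,}")            # 64 * (128 + 256) = 24,576
print(f"Unstructured scaling: {unstructured:,}")  # 64 * 128 * 256  = 2,097,152
```

Note that $m \prod_{k=1}^{K} n_k m_k = n m^2$ for an $n \times m$ dictionary, which matches the rows-times-columns-squared form of the unstructured bound quoted from Jung et al. below.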
“…In the case of unstructured dictionaries, several works do provide analytical results for the dictionary identifiability problem [14]–[21]. These results, which differ from each other in terms of the distance metric used, cannot be trivially extended to the KS-DL problem.…”
Section: B. Relationship to Prior Work
confidence: 99%
“…In this work, we focus on the Frobenius norm as the distance metric. Gribonval et al. [20] and Jung et al. [21] also consider this metric, with the latter work providing minimax lower bounds for dictionary reconstruction error. In particular, Jung et al. [21] show that the number of samples needed for reliable reconstruction (up to a prescribed mean squared error $\varepsilon$) of an $m \times p$ dictionary within its local neighborhood must be at least on the order of $N = \Omega(mp^2\varepsilon^{-2})$.…”
Section: B. Relationship to Prior Work
confidence: 99%
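As a quick, purely illustrative instance of that scaling (the numbers below are not from the paper):

$$m = 64,\quad p = 256,\quad \varepsilon = 0.1 \;\;\Longrightarrow\;\; N \gtrsim m\,p^{2}\,\varepsilon^{-2} = 64 \cdot 256^{2} \cdot 10^{2} \approx 4.2 \times 10^{8}\ \text{samples}.$$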
“…2) Second, here we focused on upper bounds on the relative error. To assess the tightness of these bounds, we hope to prove minimax lower bounds on the relative error, similar to those of Jung et al. [41]. 3) Third, as mentioned in Section I-A1, our geometric assumption in (2) can be considered a special case of the near-separability assumption for NMF [13].…”
Section: B. Future Work and Open Problems
confidence: 99%