2014
DOI: 10.1109/tsp.2014.2332441

Learning Parametric Dictionaries for Signals on Graphs

Abstract: In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties: the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, s…

Cited by 108 publications (122 citation statements)
References 39 publications
“…3, we illustrate the sparse approximation performance of our dictionary representation by studying the reconstruction performance in SNR on the set of testing signals for different sparsity levels. The performance is compared to that obtained by learning separately a dictionary on each graph [7], and the one obtained by the sparse decomposition in the graph wavelet dictionary [4]. We observe that multi-graph learning improves significantly the performance in comparison to SGWT.…”
Section: SNR (dB)
confidence: 97%
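For reference, a minimal sketch of the reconstruction-SNR metric this comparison reports, assuming the standard definition SNR = 20 log10(‖x‖ / ‖x − x̂‖); the function name and variables are illustrative, not drawn from the cited papers.

```python
import numpy as np

def snr_db(x, x_hat):
    """Reconstruction SNR in dB: 20 * log10(||x|| / ||x - x_hat||).

    x     : original graph signal (1-D array indexed by the graph's nodes)
    x_hat : its sparse approximation in the dictionary under test
    """
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))
```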
“…In particular, if the generating kernel is smooth, g(L) consists of a set of N columns, each representing a localized atom, generated by the same kernel, and positioned on different nodes on the graph [4], [1]. One can thus design graph operators consisting of localized atoms in the vertex domain by taking the kernel g(·) in (1) to be a smooth polynomial function of degree K [4], [7]:…”
Section: Representation Of Graph Signals
confidence: 99%
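To make the construction in this passage concrete, here is a minimal sketch of a polynomial graph operator g(L) = Σ_k α_k L^k, whose N columns are atoms localized within K hops of each node. The toy path graph, the coefficients `alpha`, and the helper `polynomial_operator` are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def polynomial_operator(L, alpha):
    """Return g(L) = sum_k alpha[k] * L^k for a graph Laplacian L.

    Because (L^k)[i, j] = 0 whenever nodes i and j are more than k hops
    apart, column n of g(L) is an atom supported within K = len(alpha) - 1
    hops of node n.
    """
    N = L.shape[0]
    G = np.zeros_like(L, dtype=float)
    Lk = np.eye(N)          # running power L^k, starting at L^0 = I
    for a in alpha:
        G += a * Lk
        Lk = Lk @ L         # advance to the next power of L
    return G

# Toy example: path graph on 6 nodes with a smooth degree-2 kernel.
N = 6
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # adjacency
L = np.diag(A.sum(axis=1)) - A                                # combinatorial Laplacian
alpha = [1.0, -0.5, 0.05]                                     # kernel coefficients, K = 2
D = polynomial_operator(L, alpha)
atom = D[:, 2]  # atom generated by the kernel, localized around node 2
```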