2016
DOI: 10.1109/tsp.2016.2602809

Learning Laplacian Matrix in Smooth Graph Signal Representations

Abstract: The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. […]
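As a side note on the smoothness notion the abstract refers to: for a graph with weighted adjacency matrix W and combinatorial Laplacian L = D - W, a standard smoothness measure of a signal x is the Laplacian quadratic form x^T L x = 1/2 * sum_ij W_ij (x_i - x_j)^2. The sketch below is plain NumPy, not the paper's GL-SigRep algorithm; the toy graph and signals are made up for illustration.

```python
import numpy as np

def laplacian_quadratic_form(W, X):
    """tr(X^T L X) for signals in the columns of X, with L = D - W.

    Equals 1/2 * sum_ij W_ij * (x_i - x_j)^2 per signal: small when
    strongly connected nodes carry similar values, i.e. X is smooth on W.
    """
    L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
    return np.trace(X.T @ L @ X)

# Toy 3-node path graph with one smooth and one oscillating signal.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
x_smooth = np.array([[1.0], [1.1], [1.2]])
x_rough = np.array([[1.0], [-1.0], [1.0]])
print(laplacian_quadratic_form(W, x_smooth))  # ~0.02
print(laplacian_quadratic_form(W, x_rough))   # 8.0
```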

Cited by 524 publications (540 citation statements) · References 54 publications
“…The reason why this occurs is that the graph tightly matches the smoothness of the actual signal f, since […]. Finally, we test the proposed graph learning algorithm on a random graph built from N = 25 nodes belonging to P communities, spatially distributed in equiangularly spaced circular regions with centers on the unit circle and unit radius; we set the intra-community edge probability to 0.8 and the inter-community edge probability to 0.2, with nonzero weights uniformly distributed in [0, 1]; the signal samples on nodes within a community are independently and uniformly distributed in the community-dependent range [p, p + 1], p = 1, …, P. We compare the graph learning results with those obtained using the GL-SigRep method in [1]. Specifically, Table III reports the F-measure, defined as the harmonic mean of recall and precision, for the proposed method and the method in [1], averaged over 20 runs and for different values of P.…”
Section: Results
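As a rough, self-contained sketch of the synthetic setup described in the excerpt above: the community structure, the 0.8/0.2 edge probabilities, the [0, 1] weights, and the [p, p + 1] signal ranges follow the quote, while the number of signals, the random seed, and the helper name are assumptions for illustration (the circular spatial layout of the communities is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, an assumption for reproducibility

def community_graph_and_signals(P, nodes_per_comm, n_signals=100,
                                p_in=0.8, p_out=0.2):
    """Random community graph and community-dependent signals (illustrative sketch)."""
    N = P * nodes_per_comm
    labels = np.repeat(np.arange(P), nodes_per_comm)  # community index per node

    # Edges appear with probability 0.8 inside a community and 0.2 across
    # communities; nonzero weights are drawn uniformly from [0, 1].
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            p_edge = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p_edge:
                W[i, j] = W[j, i] = rng.uniform(0.0, 1.0)

    # Nodes in community k (0-based) carry values uniform in [k + 1, k + 2],
    # i.e. the quote's [p, p + 1] for p = 1, ..., P.
    X = np.empty((N, n_signals))
    for k in range(P):
        mask = labels == k
        X[mask] = rng.uniform(k + 1, k + 2, size=(mask.sum(), n_signals))
    return W, X

W_true, X = community_graph_and_signals(P=5, nodes_per_comm=5)  # N = 25 nodes
print(W_true.shape, X.shape)
```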
“…We compare the graph learning results with those obtained using the GL-SigRep method in [1]. Specifically, Table III reports the F-measure, defined as the harmonic mean of recall and precision, for the proposed method and the method in [1], averaged over 20 runs and for different values of P. The ID-LD adjacency matrix captures the signal smoothness well, performing well with respect to…”
Section: Results
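Both excerpts evaluate graph recovery with the F-measure, the harmonic mean of precision and recall over recovered edges. The following is a minimal, hypothetical sketch of that computation on a learned versus ground-truth weighted adjacency matrix; the thresholding rule and the function name are assumptions, not taken from [1].

```python
import numpy as np

def edge_f_measure(W_learned, W_true, tol=1e-4):
    """F-measure on edge recovery: harmonic mean of precision and recall.

    An edge counts as 'detected' when its weight exceeds `tol`
    (the threshold is an assumption for this sketch).
    """
    iu = np.triu_indices_from(W_true, k=1)   # each undirected edge once
    pred = W_learned[iu] > tol
    true = W_true[iu] > tol

    tp = float(np.sum(pred & true))
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy check: perfect recovery gives an F-measure of 1.0.
W = np.array([[0.0, 0.7, 0.0],
              [0.7, 0.0, 0.3],
              [0.0, 0.3, 0.0]])
print(edge_f_measure(W, W))  # 1.0
```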