2018
DOI: 10.1109/tsipn.2017.2710905

Greedy Sparse Learning Over Network

Cited by 12 publications (8 citation statements)
References 39 publications
“…The pruned BPDN (pNBPDN) is closer to distributed greedy algorithms that work with the same system setup. Examples of such greedy algorithms are network greedy pursuit (NGP) [22] and distributed hard thresholding pursuit (DHTP) [31]. We mention that NGP and DHTP have the RIP conditions δ_3s(A_l) < 0.362 and δ_3s(A_l) < 0.333, respectively.…”
Section: Discussion
confidence: 99%
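The hard thresholding operation at the core of DHTP-style methods is simple to state. Below is a minimal, hedged sketch of one centralized hard thresholding pursuit iteration for illustration only — it is not the distributed algorithm of [31]; the function names and the step size `mu` are assumptions made for this sketch.

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x; zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def htp_step(A, y, x, s, mu=1.0):
    # One hard thresholding pursuit iteration (illustrative sketch):
    # 1) gradient step on the least-squares objective ||y - A x||^2,
    # 2) hard-threshold to the s largest entries to select a support,
    # 3) least-squares refit restricted to that support.
    z = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    support = np.flatnonzero(z)
    x_new = np.zeros_like(x)
    x_new[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x_new
```

The RIP constants quoted above (δ_3s(A_l) < 0.362 for NGP, δ_3s(A_l) < 0.333 for DHTP) are sufficient conditions under which iterations of this kind provably recover an s-sparse signal.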
“…Based on the subspace pursuit [19] and CoSaMP [20] algorithms used for centralized sparse learning, a set of distributed algorithms is proposed in [14], [21] that provides a high computational advantage. With the same motivation of low computational complexity, a recent work on designing distributed greedy pursuit algorithms is [22].…”
Section: B. Literature Review
confidence: 99%
“…Therefore we focus on developing distributed greedy sparse learning algorithms that exchange intermediate estimates of x, or relevant parameters, over the network. Relevant past works in this direction are [11], [12], [13], where we proposed rules for information exchange over a network and developed new greedy algorithms for various signal models. In this article, we investigate an existing distributed greedy algorithm from the literature [14].…”
Section: Introduction
confidence: 99%
“…However, their algorithm relies on knowledge of the data at each sensor, so its performance decays as the number of sensors increases. In [25] a distributed greedy algorithm for sparse learning is proposed; the algorithm is not based on a consensus scheme but is instead designed to achieve efficiency through cooperation and information exchange. In [6] a framework for distributed minimization of non-convex functions is proposed; it builds on gradient differences and successive convex approximations, and uses consensus to distribute the computation among network nodes, where each node solves a local convex approximation problem followed by local averaging operations.…”
Section: Introduction
confidence: 99%
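The consensus-averaging primitive mentioned in the excerpt above can be sketched in a few lines. This is a generic illustration under assumed conditions — the ring topology and the specific mixing matrix `W` are choices made for this sketch, not the scheme of [6]:

```python
import numpy as np

def consensus_round(estimates, W):
    # One consensus round: every node replaces its estimate (a row of
    # `estimates`) with a weighted average of its neighbors' estimates.
    # W is a doubly stochastic mixing matrix, so repeated rounds drive
    # all rows toward the network-wide average.
    return W @ estimates

def ring_mixing_matrix(L):
    # Mixing matrix for a ring of L nodes: each node averages itself
    # (weight 1/2) with its two neighbors (weight 1/4 each). Rows and
    # columns both sum to 1, so the network average is preserved.
    W = 0.5 * np.eye(L)
    for i in range(L):
        W[i, (i - 1) % L] = 0.25
        W[i, (i + 1) % L] = 0.25
    return W
```

Because W is doubly stochastic and the ring is connected, iterating `consensus_round` converges geometrically to the average of the initial node estimates, which is the "local averaging operations" step the excerpt describes.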