2015
DOI: 10.1109/tsp.2015.2393839
Greedy Sparsity-Promoting Algorithms for Distributed Learning

Abstract: This paper focuses on the development of novel greedy techniques for distributed learning under sparsity constraints. Greedy techniques have been widely used in centralized systems because of their low computational requirements and, at the same time, their relatively good performance in estimating sparse parameter vectors/signals. The paper reports two new algorithms in the context of sparsity-aware learning. In both cases, the goal is first to identify the support set of the unknown signal and then to estimate…
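The two-step pattern described in the abstract (first identify the support set, then estimate the nonzero values) can be illustrated with a minimal, hypothetical sketch. The code below is not one of the paper's two algorithms and is not distributed; it only shows a generic greedy support-identification step followed by a least-squares estimate on that support. All names and the sparsity level s are assumptions made for the example.

```python
import numpy as np

def greedy_sparse_estimate(A, y, s):
    """Illustrative sketch: estimate an s-sparse x from y ≈ A @ x.

    1) Identify a candidate support from the largest correlations |A^T y|.
    2) Estimate the nonzero values by least squares restricted to that support.
    (Not the paper's distributed algorithms; a generic greedy pattern only.)
    """
    proxy = A.T @ y                           # correlation of each column with y
    support = np.argsort(np.abs(proxy))[-s:]  # indices of the s largest entries
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy usage with a synthetic 3-sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.flatnonzero(greedy_sparse_estimate(A, y, 3)))
```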

Cited by 15 publications (20 citation statements)
References 37 publications
“…Further improvement on the work in [10] was proposed in [11] to provide a reduced communication cost. Another distributed iterative hard thresholding (DiHaT) algorithm is developed in [4], where observations, measurement matrices, and local estimates are exchanged over the network to achieve consensus. The DiHaT algorithm provides fast convergence compared to D-LASSO and competitive performance, but at the expense of a high communication cost.…”
Section: A. Relation To Prior Work
confidence: 99%
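As a rough illustration of the consensus-based exchange described in the statement above, here is a hypothetical sketch of a single node's iteration in a distributed iterative hard thresholding scheme: average the estimates received from neighbors, take a local gradient step on the node's own least-squares cost, and hard-threshold the result. This is not the exact DiHaT algorithm of [4]; the function names, step size, and update order are assumptions made for the example.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    x_out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    x_out[keep] = x[keep]
    return x_out

def node_update(x_neighbors, A_k, y_k, s, mu=0.5):
    """Illustrative single-node iteration (not the exact DiHaT update of [4]).

    x_neighbors : list of current estimates received from neighbors (incl. own)
    A_k, y_k    : local measurement matrix and observations at node k
    s           : sparsity level
    mu          : local gradient step size (illustrative value)
    """
    x_avg = np.mean(x_neighbors, axis=0)         # consensus averaging
    grad = A_k.T @ (y_k - A_k @ x_avg)           # gradient of the local LS cost
    return hard_threshold(x_avg + mu * grad, s)  # sparsify the updated estimate
```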
“…The DiHaT algorithm provides fast convergence compared to D-LASSO and competitive performance, but at the expense of a high communication cost. In [4], an alternative algorithm was also proposed that only uses estimate exchange, but without any theoretical analysis. Our algorithm can be implemented in more general networks with different measurement sizes at different nodes.…”
Section: A. Relation To Prior Work
confidence: 99%