2010
DOI: 10.1109/tsp.2010.2055862

Distributed Sparse Linear Regression


Cited by 465 publications (458 citation statements: 4 supporting, 454 mentioning, 0 contrasting).
References 32 publications.
“…Distributed compressed sensing using convex optimization is addressed in [6], [7]. Using the alternating direction method of multipliers (ADMM), distributed basis pursuit [8] and the distributed LASSO (D-LASSO) [9] were realized. D-LASSO is shown to solve the exact convex optimization problem of the centralized scenario in a distributed fashion.…”
Section: A Relation to Prior Work (mentioning)
confidence: 99%
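The "exact" property this statement highlights is easy to see in a global-consensus ADMM formulation of the LASSO: each agent repeatedly solves a small ridge-like subproblem on its private data, a single soft-thresholding step enforces sparsity on the shared iterate, and the fixed point coincides with the centralized LASSO minimizer. The NumPy sketch below illustrates that formulation; it is a simplified stand-in with an averaging z-update, not the neighborhood-based message exchanges of D-LASSO in [9], and the names (consensus_lasso_admm, rho, n_iter) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding, the prox operator of kappa*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def consensus_lasso_admm(A_list, b_list, lam, rho=1.0, n_iter=200):
    """Global-consensus ADMM sketch for
    min_x sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1,
    where agent i holds only its private block (A_i, b_i)."""
    N = len(A_list)
    p = A_list[0].shape[1]
    x = [np.zeros(p) for _ in range(N)]   # local primal variables
    u = [np.zeros(p) for _ in range(N)]   # scaled dual variables
    z = np.zeros(p)                       # shared consensus variable
    # Each agent pre-factors its own system (A_i^T A_i + rho*I) once.
    L = [np.linalg.cholesky(A.T @ A + rho * np.eye(p)) for A in A_list]
    for _ in range(n_iter):
        for i in range(N):                # local, fully parallel x-updates
            rhs = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(L[i].T, np.linalg.solve(L[i], rhs))
        # z-update: soft-threshold the network average (the one global step).
        z = soft_threshold(np.mean([x[i] + u[i] for i in range(N)], axis=0),
                           lam / (rho * N))
        for i in range(N):                # local dual ascent
            u[i] += x[i] - z
    return z
```

Because the z-update only needs the network average of x_i + u_i, the global step can itself be replaced by an in-network consensus round, which is essentially the refinement that makes D-LASSO fully distributed.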
“…In addition to distributed gradient-like methods, a different class of methods, distributed (augmented) Lagrangian and distributed alternating direction method of multipliers (ADMM) methods, has been studied, e.g., in [10], [29]-[35]. These have in general more complex iterations than gradient methods, but may have a lower total communication cost, e.g., [30].…”
Section: Brief Comment on the Literature (mentioning)
confidence: 99%
“…Algorithm D-NG: Node i maintains its solution estimate x_i(k) and an auxiliary variable y_i(k). It uses arbitrary initialization and, for k = 0, 1, ..., performs the updates

x_i(k+1) = Σ_{j∈O_i} W_ij y_j(k) − α_k ∇f_i(y_i(k))   (9)
y_i(k+1) = x_i(k+1) + β_k (x_i(k+1) − x_i(k))         (10)

In (9)-(10), O_i is the neighborhood of node i (including node i itself). For k ≥ 0 the step-size is α_k = c/(k+1) (11), and β_k = k/(k+3) is the sequence from the centralized Nesterov gradient method [38] (12). The D-NG algorithm works as follows.…”
Section: Algorithms D-NG and D-NC for Static Network (mentioning)
confidence: 99%
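For concreteness, the quoted updates (9)-(12) translate into a few lines of NumPy. The sketch assumes a doubly stochastic mixing matrix W supported on the network's edges (W[i, j] > 0 only if j is in O_i), the step-size α_k = c/(k+1) from (11), and the Nesterov sequence β_k = k/(k+3) from (12); the function name d_ng and the gradient-callback interface are illustrative assumptions.

```python
import numpy as np

def d_ng(grads, W, p, c=0.1, n_iter=500):
    """Sketch of the D-NG updates (9)-(10) quoted above.

    grads : list of callables, grads[i](y) returns the gradient of f_i at y
    W     : (N, N) doubly stochastic mixing matrix; W[i, j] > 0 only
            when j is a neighbor of i (self-loops included)
    p     : dimension of the decision variable
    """
    N = len(grads)
    x = np.zeros((N, p))                  # solution estimates x_i(k)
    y = np.zeros((N, p))                  # auxiliary variables y_i(k)
    for k in range(n_iter):
        alpha = c / (k + 1)               # step-size, eq. (11)
        beta = k / (k + 3)                # Nesterov sequence, eq. (12)
        g = np.array([grads[i](y[i]) for i in range(N)])
        x_new = W @ y - alpha * g         # consensus + gradient step, eq. (9)
        y = x_new + beta * (x_new - x)    # momentum extrapolation, eq. (10)
        x = x_new
    return x                              # each row approaches the consensus optimum
```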
“…As a result, the complexity and resource consumption will be reduced, and distributed estimation is more flexible and robust to node and/or link failures [2], [3]. Recently, many distributed estimation algorithms have been proposed, such as distributed recursive least squares (RLS) [4], distributed least-mean squares (LMS) [5], distributed sparse estimation [6], [7], the distributed expectation-maximization (EM) algorithm [8], distributed Gaussian process regression [9], and the distributed variational Bayesian (VB) algorithm [10].…”
Section: Introduction (mentioning)
confidence: 99%
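Of the algorithms this passage lists, diffusion LMS [5] is the easiest to make concrete, and it illustrates why distributed estimation tolerates node or link failures: every node adapts on its own streaming data and then merely averages with its current neighbors. The adapt-then-combine (ATC) sketch below is a generic textbook variant rather than the specific estimator of [5]; the combination matrix W, step-size mu, and data layout are assumptions made for illustration.

```python
import numpy as np

def diffusion_lms_atc(U, d, W, mu=0.01):
    """Generic adapt-then-combine (ATC) diffusion LMS.

    U : (N, T, p) array; node i observes the regressor U[i, k] at time k
    d : (N, T) array of scalar measurements d[i, k]
    W : (N, N) row-stochastic combination matrix over the network
    """
    N, T, p = U.shape
    w = np.zeros((N, p))                       # per-node weight estimates
    for k in range(T):
        # Adapt: each node takes a local LMS step on its own sample.
        err = d[:, k] - np.einsum('ip,ip->i', U[:, k, :], w)
        psi = w + mu * err[:, None] * U[:, k, :]
        # Combine: average the intermediate estimates over neighbors.
        w = W @ psi
    return w
```

If a link drops, only the corresponding entries of W change; the remaining nodes keep estimating, which is the robustness the quoted statement refers to.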