2012
DOI: 10.3182/20120711-3-be-2027.00310

An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems

Abstract: We present an alternating augmented Lagrangian method for convex optimization problems where the cost function is the sum of two terms, one that is separable in the variable blocks, and a second that is separable in the difference between consecutive variable blocks. Examples of such problems include Fused Lasso estimation, total variation denoising, and multi-period portfolio optimization with transaction costs. In each iteration of our method, the first step involves separately optimizing over each variable block […]
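As a concrete illustration of the splitting the abstract describes, here is a hedged sketch of ADMM applied to 1D total variation denoising, the simplest member of this problem class. The instance, variable names, penalty parameter rho, and the dense linear solve in the x-step are illustrative assumptions for clarity, not the authors' implementation.

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, iters=200):
    """ADMM sketch for min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|.

    The TV term is split off via the constraint z = D x, where D is the
    (n-1) x n first-difference matrix, giving the three classic ADMM steps.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # dense for clarity; use a sparse/banded D in practice
    A = np.eye(n) + rho * D.T @ D    # x-step system matrix (constant, could be factored once)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)              # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))           # quadratic x-step
        Dx = D @ x
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # soft-thresholding z-step
        u += Dx - z                                               # dual update
    return x

# Usage: recover a piecewise-constant signal from noisy samples.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])
estimate = tv_denoise_admm(truth + 0.1 * rng.standard_normal(100), lam=1.0)
```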

Citation report: cited by 197 publications, with 129 citation statements; citing publications date from 2013 to 2023. The paper lists 15 references. Representative citation statements follow.
“…Although known by some statisticians, this method seems to be largely ignored, at least in the signal processing community. Evidence of this is that iterative methods are regularly proposed for 1D TV denoising [32]–[34]. To understand the principle of the taut string method, define the sequence of running sums r by r …”
Section: Introduction (mentioning)
confidence: 99%
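The quoted definition of the running sums is cut off. Under the usual taut-string convention (an assumption here, not recovered from the excerpt), r_0 = 0 and r_k = y_1 + … + y_k, and the TV-denoised signal is the slope of the shortest ("taut") string threaded through the tube r ± λ with pinned endpoints. A minimal sketch:

```python
import numpy as np

# Running sums for the taut-string construction, assuming the standard
# convention r[0] = 0, r[k] = y[1] + ... + y[k] (the quoted definition is
# truncated). The denoised signal is the slope of the shortest path kept
# inside the tube [r - lam, r + lam] with both endpoints fixed.
y = np.array([1.0, 2.0, 0.5, 3.0])
r = np.concatenate([[0.0], np.cumsum(y)])   # -> [0.0, 1.0, 3.0, 3.5, 6.5]
```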
“…Our approach is closely related to a technique called total-variation denoising from the image-processing literature (Rudin et al, 1992), which is also known in statistics as the fused lasso (Tibshirani et al, 2005). We draw heavily on recent work about computationally efficient estimation for this class of optimization problems, including Tibshirani and Taylor (2011), Ramdas and Tibshirani (2014), Wahlberg et al (2012), Tansey et al (2014), Wang et al (2014), and Tansey and Scott (2015). Specifically, we use the algorithm from Tansey and Scott (2015) (which is itself strongly motivated by the discussion in Wang et al, 2014) to solve a series of optimization problems that combine a binomial likelihood together with a total-variation penalty over the nodes of an undirected graph.…”
Section: Statistical Background (mentioning)
confidence: 99%
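A hedged transcription of the objective this excerpt describes, with notation assumed rather than taken from the cited works: β_s is the log-odds at node s of an undirected graph (V, E), y_s the successes out of n_s trials at that node, and the constant binomial coefficient is dropped.

```latex
\min_{\beta \in \mathbb{R}^{|V|}}
  \sum_{s \in V} \Bigl( n_s \log\bigl(1 + e^{\beta_s}\bigr) - y_s \beta_s \Bigr)
  + \lambda \sum_{(s,t) \in E} \lvert \beta_s - \beta_t \rvert
```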
“…Standard software for convex optimization can be used to solve (1). However, it is possible to use the special structure to derive more efficient optimization algorithms for (1), see [2], [14].…”
Section: Problem Statement (mentioning)
confidence: 99%
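Problem (1) is not reproduced in this excerpt. As an illustration of the "standard software" route, a generic member of the paper's problem class (an assumed least-squares-plus-TV instance, not the excerpt's actual (1)) can be posed directly in CVXPY, though the structure-exploiting algorithms of [2], [14] scale better for large n:

```python
import cvxpy as cp
import numpy as np

# Illustrative stand-in for problem (1): quadratic fit to data y plus a
# total variation penalty on consecutive differences.
n = 100
rng = np.random.default_rng(0)
y = rng.standard_normal(n)
lam = 0.5

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(x - y) + lam * cp.tv(x))
cp.Problem(objective).solve()   # generic convex solver, no structure exploited
print(x.value[:5])
```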
“…Here we have used that the sign function defined by (14) only depends on the sign of its argument, and (14) implies that the sign is not changed by the transformation σ_t^2 = −1/(2η_t).…”
Section: Proof: First Notice That (mentioning)
confidence: 99%