2015
DOI: 10.1007/s40305-015-0078-y

An Alternating Direction Approximate Newton Algorithm for Ill-Conditioned Inverse Problems with Application to Parallel MRI

Abstract: An alternating direction approximate Newton (ADAN) method is developed for solving inverse problems of the form min{φ(Bu) + (1/2)||Au − f||₂²}, where φ is convex and possibly nonsmooth, and A and B are matrices. Problems of this form arise in image reconstruction, where A is the matrix describing the imaging device, f is the measured data, φ is a regularization term, and B is a derivative operator. The proposed algorithm is designed to handle applications where A is a large, dense, ill-conditioned matrix. The al…
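The minimization in the abstract is naturally handled by operator splitting. Below is a minimal sketch of classical ADMM (the method ADAN builds on, not ADAN itself) applied to min{φ(Bu) + (1/2)||Au − f||₂²} with φ = ||·||₁; the problem sizes, matrices, and penalty β are illustrative assumptions, not the paper's parallel-MRI setup.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise shrinkage)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm(A, B, f, beta=1.0, iters=200):
    # Classical ADMM for min ||B u||_1 + 0.5 * ||A u - f||_2^2
    # via the splitting w = B u with multiplier lam.
    u = np.zeros(A.shape[1])
    w = np.zeros(B.shape[0])
    lam = np.zeros(B.shape[0])
    M = A.T @ A + beta * (B.T @ B)       # u-subproblem normal matrix
    Atf = A.T @ f
    for _ in range(iters):
        # u-step: solve (A^T A + beta B^T B) u = A^T f + B^T (beta w - lam)
        u = np.linalg.solve(M, Atf + B.T @ (beta * w - lam))
        # w-step: prox of (1/beta) * ||.||_1 at B u + lam / beta
        w = soft_threshold(B @ u + lam / beta, 1.0 / beta)
        # dual update
        lam = lam + beta * (B @ u - w)
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # toy stand-in for the imaging matrix
B = np.eye(10)                      # toy stand-in for the derivative operator
f = rng.standard_normal(20)

u = admm(A, B, f)
obj = np.sum(np.abs(B @ u)) + 0.5 * np.linalg.norm(A @ u - f) ** 2
obj0 = 0.5 * np.linalg.norm(f) ** 2     # objective at u = 0
```

Note the u-step requires solving a linear system with A^T A + βB^T B; when A is large, dense, and ill-conditioned, this exact solve is what ADAN avoids.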

Cited by 32 publications (17 citation statements) · References 24 publications (37 reference statements)
“…But it should be mentioned that it is significantly different from the so-called linearized ADMM in the literature (e.g., [8,35,36]), which aims at linearizing the quadratic terms in ADMM's subproblems with a sufficiently large proximal parameter (e.g., it should be greater than β · A T i A i if the x i -subproblem in (3.1) is considered) and thus alleviating the linearized subproblems. In other words, the proximal parameters in current linearized ADMM literature are dependent on the involved matrices of the corresponding quadratic terms and they may need to be sufficiently large to ensure the convergence if the matrices happen to be ill-conditioned, see, e.g., [16]. In (3.1), however, the proximal parameters τ 1 and τ 2 are just constants independent of the matrices.…”
Section: Algorithm 1: A Splitting Version of the Block-wise ADMM for …
confidence: 99%
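The dependence described in the excerpt above can be made concrete: the lower bound on the proximal parameter in linearized ADMM scales with the spectral norm of the matrix in the quadratic term (a bound of the form β·||A_iᵀA_i|| in the quote). A small numerical sketch, with both operators chosen as toy assumptions:

```python
import numpy as np

beta = 10.0
rng = np.random.default_rng(0)

B_well = np.eye(8)                           # well-scaled operator
B_bad = 50.0 * rng.standard_normal((8, 8))   # badly scaled operator

# Lower bound on the linearized-ADMM proximal parameter for each operator:
# tau must exceed beta * ||B^T B||_2 (spectral norm) to guarantee convergence.
tau_well = beta * np.linalg.norm(B_well.T @ B_well, 2)
tau_bad = beta * np.linalg.norm(B_bad.T @ B_bad, 2)
```

Since the effective stepsize of the linearized update is 1/τ, a badly scaled operator forces a large τ and hence slow progress, which is exactly the contrast drawn with the matrix-independent constants τ₁ and τ₂.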
“…In recent years, Newton-type methods have been combined with the forward-backward splitting (FBS) algorithm to accelerate the speed of the original FBS algorithm. See, for example, [24][25][26]. Argyriou et al [27] considered the following convex optimization problem:…”
Section: Introduction
confidence: 99%
“…In Hong-Chao Zhang's paper [6], he uses the Alternating Direction Approximate Newton (ADAN) method, based on the Alternating Direction Method of Multipliers (ADMM) originally proposed in [7], to solve (1). He employs the BB approximation to accelerate the iterations.…”
Section: Introduction
confidence: 99%
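The BB (Barzilai-Borwein) approximation mentioned above replaces the Hessian AᵀA by a scalar multiple of the identity, α_k·I, fitted to the most recent step. A minimal sketch on the plain least-squares term (1/2)||Au − f||₂², with toy data; ADAN applies the same scalar approximation inside its ADMM subproblem rather than in plain gradient descent:

```python
import numpy as np

def bb_gradient(A, f, iters=50):
    # Gradient descent on 0.5 * ||A u - f||_2^2 with the BB1 stepsize:
    # the Hessian A^T A is approximated by alpha * I, where alpha is
    # fitted to the secant pair (s, y) from the previous step.
    u = np.zeros(A.shape[1])
    g = A.T @ (A @ u - f)                # gradient at u
    alpha = 1.0                          # initial curvature estimate
    for _ in range(iters):
        u_new = u - g / alpha            # step as if Hessian were alpha * I
        g_new = A.T @ (A @ u_new - f)
        s, y = u_new - u, g_new - g
        if s @ s == 0.0:                 # converged to machine precision
            u = u_new
            break
        alpha = max(s @ y / (s @ s), 1e-12)   # BB1 curvature estimate
        u, g = u_new, g_new
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))        # toy, well-conditioned data
f = rng.standard_normal(30)

u = bb_gradient(A, f)
u_star, *_ = np.linalg.lstsq(A, f, rcond=None)   # reference solution
```

The appeal for ill-conditioned problems is that each iteration needs only matrix-vector products with A and Aᵀ, never a solve with AᵀA.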
“…In many applications, the subproblems in ADMM are easily solvable, so ADMM iterations can be performed at a low computational cost. Besides, combining different Newton-based methods with ADMM has become a trend, see [6] [8] [9], since those methods may achieve a high convergence speed.…”
Section: Introduction
confidence: 99%