2006
DOI: 10.1137/050624509
A Quadratically Convergent Newton Method for Computing the Nearest Correlation Matrix

Abstract: The nearest correlation matrix problem is to find the correlation matrix closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach reformulates this problem as an unconstrained, continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods such as BFGS have been applied directly to obtain globally convergent methods. Since the objective function in the dual approach is not twice continuously differentiable, these methods converge at best…
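As background on the dual approach the abstract refers to, here is a minimal sketch. The function names `proj_psd` and `nearest_corr_dual` are mine, and it uses SciPy's L-BFGS-B rather than the paper's semismooth Newton method, so it illustrates the dual formulation only, not the quadratically convergent algorithm itself:

```python
import numpy as np
from scipy.optimize import minimize

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone S^n_+ by eigenvalue clipping."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

def nearest_corr_dual(G):
    """Nearest correlation matrix to G via the dual approach:
    minimize theta(y) = 0.5*||P_+(G + Diag(y))||_F^2 - e'y over y in R^n,
    whose gradient is diag(P_+(G + Diag(y))) - e.
    """
    n = G.shape[0]

    def theta(y):
        X = proj_psd(G + np.diag(y))
        # Return objective value and gradient together for the optimizer.
        return 0.5 * np.sum(X * X) - y.sum(), np.diag(X) - 1.0

    res = minimize(theta, np.zeros(n), jac=True, method="L-BFGS-B")
    return proj_psd(G + np.diag(res.x))
```

At a dual optimum y*, the primal solution is recovered as the projection of G + Diag(y*) onto the PSD cone, which then has (approximately) unit diagonal.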

Cited by 243 publications (276 citation statements)
References 45 publications (71 reference statements)
“…The Newton algorithm of [39] which solves the original nearest correlation matrix problem does not generalize to the fixed elements variant. According to Qi and Sun [41, p. 509], the Newton method that solves the so-called H-weighted nearest correlation matrix problem…”
Section: Fixing Elements
confidence: 99%
“…The Newton algorithm [39] for the original nearest correlation matrix problem can be used to compute the solution to the problem with the constraint on λ_min. We discuss this modification of the alternating projections method because it further demonstrates the flexibility of the method, which can easily incorporate both the fixed elements constraint and the eigenvalue constraint, unlike the existing Newton methods.…”
Section: Imposing a Lower Bound on the Smallest Eigenvalue
confidence: 99%
“…The W-weighted problem (1.1) has been well studied since Higham (2002), and there are now several good methods for it, including the alternating projection method (Higham, 2002), the gradient and quasi-Newton methods (Malick, 2004; Boyd & Xiao, 2005), the inexact semismooth Newton method combined with the conjugate gradient (CG) solver (Qi & Sun, 2006) and its modified version with several (preconditioned) iterative solvers (Borsdorf, 2007; Borsdorf & Higham, 2009), and the inexact interior-point methods (IPMs) with iterative solvers (Toh et al., 2007; Toh, 2008). All of these methods, except the inexact IPMs, crucially rely on the fact that the projection of a given matrix X ∈ S^n onto S^n_+ under the W-weighting, denoted by Π^W_{S^n_+}(X), which is the optimal solution of the following problem:…”
Section: H. Qi and D. Sun
confidence: 99%
“…Solving the W-weighted problem (1.1) is equivalent to solving a problem of the following type (cf. Qi & Sun, 2006, Section 4.1):

    min ½‖X − G‖²  such that  diag(W^(−1/2) X W^(−1/2)) = e,  X ∈ S^n_+ …”
Section: H. Qi and D. Sun
confidence: 99%
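For the unweighted case W = I, the problem quoted above is exactly the nearest correlation matrix problem, and the alternating projections method cited above (Higham, 2002) can be sketched as follows. This is a linearly convergent baseline, in contrast to the paper's quadratically convergent Newton method; the function name is mine:

```python
import numpy as np

def nearest_correlation(G, tol=1e-8, max_iter=1000):
    """Alternating projections with Dykstra's correction (Higham, 2002)
    for the unweighted case W = I: alternate between projecting onto the
    PSD cone S^n_+ and onto the set of symmetric matrices with unit diagonal.
    """
    Y = G.copy()
    dS = np.zeros_like(G)
    for _ in range(max_iter):
        R = Y - dS                      # apply Dykstra's correction before the cone step
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = V @ np.diag(np.maximum(w, 0.0)) @ V.T   # project onto S^n_+
        dS = X - R                      # update the correction term
        Y_prev, Y = Y, X.copy()
        np.fill_diagonal(Y, 1.0)        # project onto the unit-diagonal set
        if np.linalg.norm(Y - Y_prev, "fro") < tol:
            break
    return Y
```

Dykstra's correction term `dS` is what makes the iteration converge to the nearest point in the intersection rather than merely to some feasible point; it is needed because the PSD cone is not an affine set.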