“…As a consequence of this we obtain the following result, stated by Fitzpatrick and Phelps [14] under the stronger assumption that the boundary of C be of class C^1. Notice that this is a localized version of the principal result of Holmes [17].…”
Section: Theorem 6.3 Suppose Pc Is Directionally Fréchet Differentiable (supporting)
confidence: 59%
“…First assume (1), that is, ∆* := ∆_{f,x,y,t} → q for some limit function q. Now by Attouch's characterization of Mosco convergence, (cf.…”
Section: Proposition 3.2 Let F Be a Convex C^{1,λ} Function On H Then (mentioning)
confidence: 99%
“…Indeed, take h = (…), which converges weakly but not in norm to e_1. This follows by checking that for x = …, the maximum in (7.1) is attained at n = 2k − 2.…”
The differentiability properties of the metric projection Pc onto a closed convex set C in Hilbert space are characterized in terms of the smoothness type of the boundary of C. Our approach is based on using variational-type second derivatives as a sufficiently flexible tool to describe the boundary structure of the set C with regard to the differentiability of Pc. We extend results by
“…In Figure 4, a matrix R ∈ M_{l,m} with m = 100 and l = 150 is considered, with singular values chosen to be equally spaced in the interval [1, 10]. Three optimization algorithms detailed in [18] (gradient descent with fixed step, conjugate gradient descent, and Newton's method) are implemented to find the best rank r = 5 approximation of R, with a random initialization.…”
Section: 2 (mentioning)
confidence: 99%
“…The differentiability of the projection map for arbitrary sets has been studied in [81,1] and more recently in the context of smooth manifolds in [7,26,11] with recent applications in shape optimization [6]. The following theorem reformulates these results in the framework of this article.…”
Any model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE can be related to a vector field tangent to the manifold of fixed rank matrices. The Dynamically Orthogonal (DO) approximation is the canonical reduced order model for which the corresponding vector field is the orthogonal projection of the original system dynamics onto the tangent spaces of this manifold. The embedded geometry of the fixed rank matrix manifold is thoroughly analyzed. The curvature of the manifold is characterized and related to the smallest singular value through the study of the Weingarten map. Differentiability results for the orthogonal projection onto embedded manifolds are reviewed and used to derive an explicit dynamical system for tracking the truncated Singular Value Decomposition (SVD) of a time-dependent matrix. It is demonstrated that the error made by the DO approximation remains controlled under the minimal condition that the original solution stays close to the low rank manifold, which translates into an explicit dependence of this error on the gap between singular values. The DO approximation is also justified as the dynamical system that applies the SVD truncation instantaneously to optimally constrain the rank of the reduced solution. Riemannian matrix optimization is investigated in this extrinsic framework to provide algorithms that adaptively update the best low rank approximation of a smoothly varying matrix. The related gradient flow provides a dynamical system that converges to the truncated SVD of an input matrix for almost all initial data.
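The gradient-flow idea in the last sentences can be sketched in a few lines. The code below is a hypothetical illustration, not the paper's algorithm: it discretizes the gradient flow of f(X) = ½‖X − A(t)‖²_F over rank-r matrices, using truncated SVD as the retraction back onto the fixed-rank manifold (the names `svd_truncate` and `track_low_rank` are ours):

```python
import numpy as np

def svd_truncate(A, r):
    """Retraction onto the rank-r manifold: keep the top r singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

def track_low_rank(A_of_t, r, times, step=0.5):
    """Illustrative tracker: retracted gradient steps on
    f(X) = 0.5 * ||X - A(t)||_F^2 restricted to rank-r matrices."""
    X = svd_truncate(A_of_t(times[0]), r)   # initialize on the manifold
    errors = []
    for t in times:
        A = A_of_t(t)
        # Euclidean gradient of f is X - A; retract the descent step.
        X = svd_truncate(X - step * (X - A), r)
        errors.append(np.linalg.norm(X - svd_truncate(A, r)))
    return X, errors

# Static input: the tracker should settle at the truncated SVD of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
X, errors = track_low_rank(lambda t: A, r=3, times=range(40))
print(errors[-1])
```

For a slowly varying A(t), the same loop yields an adaptive update of the best rank-r approximation, in the spirit of the gradient flow described in the abstract.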