2020
DOI: 10.1002/fld.4843
Low‐rank solution of an optimal control problem constrained by random Navier‐Stokes equations

Abstract: Many problems in computational science and engineering are simultaneously characterized by the following challenging issues: uncertainty, nonlinearity, nonstationarity and high dimensionality. Existing numerical techniques for such models would typically require considerable computational and storage resources. This is the case, for instance, for an optimization problem governed by time-dependent Navier-Stokes equations with uncertain inputs. In particular, the stochastic Galerkin finite element method often l…

Cited by 13 publications (12 citation statements); References 72 publications (230 reference statements).
“…We address this issue by applying a low-rank variant of the generalized minimal residual method (GMRES) [30] with suitable preconditioners, which reduce both the storage requirements and the computational complexity by exploiting the Kronecker-product structure of the system matrices; see, e.g., [31,32,33]. Low-rank approximation for optimal control problems with uncertain inputs has also been studied in [14,34,35,36] for unconstrained control problems and in [36] for control-constrained problems. In the aforementioned studies, randomness is generally placed in the diffusion parameter; here, we consider randomness in both the diffusion and convection parameters, and we also use a discontinuous Galerkin method in the spatial domain.…”
Section: Pde-constraint Optimization Problems With Uncertainty Have B...mentioning
confidence: 99%
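The Kronecker-product structure mentioned in the excerpt above is what makes low-rank Krylov solvers cheap: a matrix of the form B ⊗ A need never be formed, since (B ⊗ A) vec(X) = vec(A X Bᵀ) for column-major vec. A minimal NumPy sketch of this identity (the matrices here are random placeholders, not the actual stochastic Galerkin system matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # "spatial" factor (placeholder)
B = rng.standard_normal((3, 3))   # "stochastic" factor (placeholder)
X = rng.standard_normal((4, 3))   # low-rank iterate kept in matrix form

# Naive: form the 12x12 Kronecker product explicitly.
full = np.kron(B, A) @ X.flatten(order="F")

# Structure-exploiting: two small matrix products, no Kronecker product.
lowmem = (A @ X @ B.T).flatten(order="F")

assert np.allclose(full, lowmem)
```

In a low-rank GMRES variant, every operator application is carried out in this factored form and the iterates are re-truncated to low rank, which is where the storage savings come from.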
“…As a result, the low-rank method achieves greater computational savings for problems with larger correlation length. Next, we examine the performance of the low-rank approximation method for varying CoV, which is defined in (3). In this experiment, we fix the value of Re_0 = 100 and control the variance of the random σ_ν.…”
Section: Computational Costsmentioning
confidence: 99%
“…In particular, we consider a random viscosity affinely dependent on a set of random variables as suggested in [19] (and in [23], which considers a gPC approximation of the lognormally distributed viscosity). The stochastic Galerkin formulation of the stochastic Navier-Stokes equations is also considered in [3], which studies an optimal control problem constrained by the stochastic Navier-Stokes problem and computes an approximate solution using a low-rank tensor-train decomposition [17]. Related work [26] extends a Proper Generalized Decomposition method [16] for the stochastic Navier-Stokes equations, where a low-rank approximate solution is built from successively computing rank-one approximations.…”
mentioning
confidence: 99%
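The affinely parameter-dependent viscosity mentioned in the excerpt above has the generic form ν(ξ) = ν₀ + Σᵢ νᵢ ξᵢ, where the ξᵢ are independent random variables. A minimal sketch, with illustrative (not source-specified) coefficient values:

```python
def viscosity(xi, nu0=0.01, coeffs=(0.002, 0.001)):
    """Affine expansion nu(xi) = nu0 + sum_i c_i * xi_i.

    nu0 and coeffs are hypothetical values chosen for illustration;
    in practice they come from, e.g., a Karhunen-Loeve expansion.
    """
    return nu0 + sum(c * x for c, x in zip(coeffs, xi))

# At the mean of the random variables, the viscosity is its nominal value.
nominal = viscosity((0.0, 0.0))
```

Affinity in ξ is what allows the stochastic Galerkin system to be assembled as a short sum of Kronecker products of deterministic matrices with small stochastic "coupling" matrices.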
“…A general tensor X ∈ C^(n_1 × ⋯ × n_d) requires ∏_{j=1}^{d} n_j degrees of freedom to store, which scales exponentially with the order d. Therefore, it is often essential to approximate or represent large tensors in data-sparse formats so that storing and computing with them is feasible. The tensor-train (TT) decomposition [31] is a tensor format whose storage cost can scale linearly in n_j and d. The TT format is used in molecular simulations [33], high-order correlation functions [26], and partial differential equation (PDE) constrained optimization [4,14]. In practice, one tries to replace X by a tensor X̃ with a data-sparse TT format such that…”
mentioning
confidence: 99%
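The storage comparison in the excerpt above is easy to make concrete. In the TT format, a d-way tensor is stored as d cores of shape r_{j-1} × n_j × r_j with boundary ranks r_0 = r_d = 1, so the cost is Σ_j r_{j-1} n_j r_j instead of ∏_j n_j. A small sketch counting both:

```python
def full_storage(dims):
    """Entries needed to store the dense tensor: prod of mode sizes."""
    out = 1
    for n in dims:
        out *= n
    return out

def tt_storage(dims, ranks):
    """Entries in the TT cores; ranks has length d+1 with r_0 = r_d = 1.
    Core j has shape r_j x n_j x r_{j+1} (0-based indexing here)."""
    return sum(ranks[j] * n * ranks[j + 1] for j, n in enumerate(dims))

dims = [10] * 6                    # a 10 x 10 x ... x 10 tensor, d = 6
ranks = [1, 5, 5, 5, 5, 5, 1]      # all internal TT ranks equal to 5

dense = full_storage(dims)   # 10^6 = 1,000,000 entries
tt = tt_storage(dims, ranks) # 1,100 entries
```

With uniform mode size n and rank r, the TT cost is O(d n r²), linear in both n and d, which is the scaling the excerpt refers to.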