2016
DOI: 10.1137/15m1042899
An Accelerated Greedy Missing Point Estimation Procedure

Abstract: Model reduction via Galerkin projection fails to provide considerable computational savings if applied to general nonlinear systems. This is because the reduced representation of the state vector appears as an argument to the nonlinear function, whose evaluation remains as costly as for the full model. Masked projection approaches, such as the missing point estimation and the (discrete) empirical interpolation method, alleviate this effect by evaluating only a small subset of the components of a give…

Cited by 58 publications (38 citation statements) · References 16 publications
“…The selection of the vector components (i.e., the definition of the mask matrix P) is done iteratively by applying an accelerated greedy missing point estimation procedure, which improves on the exhaustive greedy point-selection algorithm by exploiting a rank-one SVD update strategy [32]. The original greedy algorithm minimizes an error indicator by sequentially looping over all the entries of the unsteady residual vector, resulting in a very high computational cost of complexity O(N r^3). The error indicator is related to the inverse of the minimum singular value σ_min(P_{s+1}^T U_r) associated with the POD modes U_r projected onto the iteratively populated mask matrix P_{s+1} ∈ R^{N×(s+1)}, where s ≥ r is the number of already selected indices.…”
Section: Hyper Reduction Methods (mentioning)
confidence: 99%
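The exhaustive greedy selection described in this citation statement can be illustrated with a minimal sketch. The version below recomputes a full SVD for every candidate row at every step, which is exactly the high cost that the paper's rank-one SVD update strategy removes; the function name and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def greedy_mpe(U, s):
    """Exhaustive greedy missing point estimation (naive sketch).

    Selects s >= r row indices of the POD basis U (N x r) so that the
    smallest singular value of the masked basis U[idx, :] stays as large
    as possible, i.e. it greedily minimizes the error indicator
    1 / sigma_min(P^T U). Recomputing the SVD for every candidate row
    is what the accelerated procedure replaces with a rank-one update.
    """
    N, r = U.shape
    selected = []
    for _ in range(s):
        best_idx, best_sigma = None, -np.inf
        for i in range(N):
            if i in selected:
                continue
            trial = U[selected + [i], :]          # candidate masked basis
            sigma = np.linalg.svd(trial, compute_uv=False)[-1]
            if sigma > best_sigma:
                best_sigma, best_idx = sigma, i
        selected.append(best_idx)
    return np.array(selected)
```

Each outer step scans all N rows and pays an SVD per candidate, which is where the quoted O(N r^3)-per-step complexity comes from.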
“…We note that the approach described here is called a tensorial ROM, since the model's predetermined terms (especially those corresponding to nonlinear terms) are computed in advance as tensors during an offline stage, while the online solution of Equation (18) scales with R^3. Alternatively, the model terms can be calculated online while minimizing the cost of computing the nonlinear terms through approximation approaches such as the discrete empirical interpolation method (DEIM) [92,93], gappy POD [60,94], and missing point estimation (MPE) [95,96]. An overview of these techniques can be found in [97,98].…”
Section: Galerkin Projection (mentioning)
confidence: 99%
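The offline/online split of a tensorial ROM can be sketched for an elementwise-quadratic nonlinearity f(u) = u ∘ u. This choice of nonlinearity and the function names are assumptions for illustration; the point is that the precomputed tensor makes the online cost scale with the reduced dimension R alone, matching the R^3 scaling quoted above.

```python
import numpy as np

def build_quadratic_tensor(V):
    """Offline stage: precompute T[i,j,k] = V[:,i]^T (V[:,j] * V[:,k])
    for the (assumed) elementwise-quadratic nonlinearity f(u) = u * u.
    This pass scales with the full dimension N but is paid only once."""
    return np.einsum('ni,nj,nk->ijk', V, V, V)

def reduced_quadratic(T, a):
    """Online stage: evaluate the reduced nonlinear term as
    T[i,j,k] a[j] a[k], at O(R^3) cost independent of N."""
    return np.einsum('ijk,j,k->i', T, a, a)
```

For a reduced state a, the online evaluation agrees with projecting the full nonlinearity, V^T ((V a) * (V a)), without ever touching vectors of length N.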
“…As already mentioned in §1.2, the PBDW framework [43] allows selecting s ≥ r approximation points, and we will proceed with the general case of rectangular S^T U_r. Oversampling has been used successfully in the related context of missing point estimation; see [3], [49], [70], [71]. We adopt the following notation.…”
Section: Short and Fat Matrices (mentioning)
confidence: 99%
“…As can be seen above, when the DEIM operator is generalized to the setting s > r, only the projection property or the interpolation property is retained, but not both simultaneously. For related developments, see [43], [71], [13].…”
Section: Short and Fat Matrices (mentioning)
confidence: 99%
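The trade-off in the rectangular (oversampled) case can be seen in a small gappy-POD-style reconstruction sketch. With s = r sampled entries the masked system is square and the reconstruction interpolates the data at those entries; with s > r it becomes a least-squares fit and the interpolation property is generally lost, as the citation statement notes. The function name is illustrative.

```python
import numpy as np

def masked_reconstruct(U, idx, u):
    """Gappy-POD / oversampled reconstruction (sketch): solve the
    least-squares problem min_a || U[idx, :] a - u[idx] || over the
    reduced coefficients a, then lift back to the full space.
    With s = len(idx) = r the masked basis is square, so the result
    interpolates u at the sampled entries; with s > r it is only a
    least-squares fit."""
    a, *_ = np.linalg.lstsq(U[idx, :], u[idx], rcond=None)
    return U @ a
```

Oversampling trades exact interpolation for a better-conditioned masked basis, which is the motivation for choosing s > r points in the first place.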