2016
DOI: 10.1007/s00162-016-0385-x
Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

Abstract: Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) m…

Cited by 63 publications (21 citation statements)
References 21 publications
“…In the case of high-resolution numerical simulations the ambient space dimension n (determined by the fineness of the grid) can be in the millions, and mere memory fetching and storing is the bottleneck that precludes efficient computation, let alone flop count. For instance, [51], [52] report computations with dimension n of half a billion on a massively parallel computer. Tu and Rowley [40] discuss an implementation of DMD that has been designed to reduce computational work and memory requirements; this approach is also adopted in the library modred [53].…”
Section: Memory-Efficient DMD
confidence: 99%
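The memory bottleneck this excerpt describes is often sidestepped by snapshot-based tricks: instead of holding the full n x N snapshot matrix in fast memory, one accumulates the small N x N Gram matrix from row-blocks of the data. The sketch below illustrates that generic tactic only; it is an assumption-laden toy, not necessarily the exact scheme of Tu and Rowley [40] or of modred [53], and the function name is my own.

```python
import numpy as np

def gram_from_blocks(blocks):
    """Accumulate the N x N Gram matrix X^H X from row-blocks of a tall
    n x N snapshot matrix X, so the full matrix never sits in memory."""
    G = None
    for B in blocks:                    # B is an (n_i x N) row-block of X
        contrib = B.conj().T @ B        # this block's N x N contribution
        G = contrib if G is None else G + contrib
    return G

# Toy usage: a small stand-in for a huge snapshot matrix, streamed in
# row-blocks of 100 grid points at a time.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 8))
G_blocked = gram_from_blocks(X[i:i + 100] for i in range(0, 1000, 100))
```

From the eigendecomposition of this small Gram matrix one can recover the POD/DMD subspace (method of snapshots) without ever forming the tall matrix's SVD directly.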
“…The goal of all projected DMD methods is to compute Q (which may or may not be orthonormal) and Q^H A Q from the snapshot vectors X_1^N alone, without knowledge of the linear mapping A. Some possible choices of Q are: the matrix of snapshots X_1^{N-1} (Schmid and Sesterhenn (2008); Schmid (2010); Rowley et al. (2009)), the left singular vectors of the economy SVD of X_1^{N-1} (Schmid (2010); Sayadi and Schmid (2016)), and the orthonormal matrix from the QR factorization of X_1^{N-1} (Hemati et al. (2014)).…”
Section: Projected DMD Methods: Background
confidence: 99%
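The SVD-based choice of Q mentioned in this excerpt can be sketched as follows: take Q as the left singular vectors of the economy SVD of the first N-1 snapshots and form the projected operator Q^H A Q without ever building A. This is a generic illustration of that idea (function and variable names are my own, not from the cited papers):

```python
import numpy as np

def projected_dmd(X, r=None):
    """Projected DMD from an n x N snapshot matrix X, with Q taken as the
    left singular vectors of the economy SVD of X_1^{N-1}."""
    X1, X2 = X[:, :-1], X[:, 1:]                       # shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)  # economy SVD; Q = U
    if r is not None:                                  # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Projected operator A_tilde = Q^H A Q, built from snapshots alone:
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    evals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes
    return evals, modes

# Toy usage: snapshots generated by a known linear map x_{k+1} = A x_k,
# so the DMD eigenvalues should recover the eigenvalues of A.
rng = np.random.default_rng(0)
Qm, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = 0.95 * Qm                    # well-conditioned test operator
X = np.empty((6, 20))
X[:, 0] = rng.standard_normal(6)
for k in range(19):
    X[:, k + 1] = A @ X[:, k]
evals, modes = projected_dmd(X)
```

Note that A never appears inside `projected_dmd`; only the snapshot pairs do, which is the defining feature of the projected methods surveyed in the excerpt.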
“…When dealing with massive fluid flows that are too large to read into fast memory, the extension to sequential, distributed, and parallel computing might be inevitable [39]. In particular, it might be necessary to distribute the data across processors which have no access to a shared memory to exchange information.…”
Section: Blocked Randomized Algorithm
confidence: 99%
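Blocked randomized algorithms of the kind this excerpt refers to typically build on a randomized range finder: sample the range of the snapshot matrix with a random test matrix, orthonormalize, and solve a small projected problem. A hedged sketch of that generic building block (standard Halko-Martinsson-Tropp style, not the specific distributed algorithm of [39]; all names here are illustrative):

```python
import numpy as np

def randomized_svd(X, r, oversample=10, seed=0):
    """Approximate rank-r SVD of X via a randomized range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], r + oversample))
    Y = X @ Omega                      # sample the range of X
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sample
    B = Q.conj().T @ X                 # small projected matrix
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vh[:r, :]

# Toy usage: an exactly rank-5 matrix, which the range finder should
# capture (up to round-off) with modest oversampling.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))
U, s, Vh = randomized_svd(X, r=5)
```

The products X @ Omega and Q^H @ X are the only passes over the data, which is why such methods distribute naturally: each processor can apply its local row-block and the partial results are combined by a reduction, without shared memory.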