2014
DOI: 10.1063/1.4866813

Estimating uncertainties in statistics computed from direct numerical simulation

Abstract: Rigorous assessment of uncertainty is crucial to the utility of DNS results. Uncertainties in the computed statistics arise from two sources: finite statistical sampling and the discretization of the Navier-Stokes equations. Due to the presence of nontrivial sampling error, standard techniques for estimating discretization error (such as Richardson extrapolation) fail or are unreliable. This work provides a systematic and unified approach for estimating these errors. First, a sampling error estimator that ac…


Cited by 121 publications (102 citation statements) | References 39 publications
“…Since statistical quantities are often provided as outputs of a DNS, the uncertainties associated with these statistics come from discretization errors and sampling errors. Recently, a discussion on this matter was conducted by Oliver et al. [21], who noted that it is not common in the DNS literature to estimate the uncertainties of the statistical quantities. One of the main difficulties is that the samples used to generate DNS statistics originate from a time history and a spatial field that are generally not independent, and the procedure used to mitigate this, namely taking samples far apart in time and space, does not eliminate it.…”
Section: Introduction
confidence: 98%
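The difficulty described above is easy to demonstrate numerically. The following is a minimal sketch (not taken from any of the cited papers) that uses an AR(1) process as an assumed surrogate for a correlated DNS time history: thinning the series by a stride shrinks the correlation between retained samples roughly as $\phi^{\text{stride}}$ but never removes it, so the naive $\sigma/\sqrt{N}$ standard error of the mean remains an underestimate.

```python
# Sketch only: the AR(1) surrogate signal and stride values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "DNS signal": AR(1) process with lag-1 correlation phi.
phi, n = 0.95, 50_000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

for stride in (1, 10, 50):
    xs = x[::stride]
    # Lag-1 autocorrelation of the thinned series: ~phi**stride, small but nonzero.
    rho1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
    naive_se = xs.std(ddof=1) / np.sqrt(len(xs))  # assumes independent samples
    print(f"stride={stride:3d}  lag-1 rho={rho1:+.3f}  naive SE={naive_se:.4f}")
```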
“…The standard uncertainties $\delta C_{f,0}$ and $\delta C_f$ are evaluated with the procedure described by Oliver et al. (2014), by calculating an integral timescale via an autoregressive method. This differs slightly from the strategy employed by Gatti & Quadrio (2013), who estimated the timescale by a different method, and it is interesting to note that the two approaches end up with essentially the same result.…”
Section: Domain Size and Uncertainty
confidence: 99%
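For concreteness, here is one plausible realization of such an autoregressive uncertainty estimate; it is a sketch, not the implementation used by Oliver et al. (2014) or Gatti & Quadrio (2013), and the Yule-Walker fit, the fixed AR order, and the function name are assumptions. The variance of the time average follows from the fitted model's spectral density at zero frequency, which plays the role of the integral timescale.

```python
# Sketch of an AR-based standard error for a time-averaged quantity.
import numpy as np
from scipy.linalg import toeplitz

def ar_standard_error(x, order=4):
    """Standard error of mean(x) from a Yule-Walker AR(order) fit (order is an assumption)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # Biased sample autocovariances r(0), ..., r(order)
    r = np.array([xc[: n - k] @ xc[k:] / n for k in range(order + 1)])
    phi = np.linalg.solve(toeplitz(r[:order]), r[1 : order + 1])  # AR coefficients
    sigma2_eps = r[0] - phi @ r[1 : order + 1]                    # innovation variance
    # Spectral density at zero frequency: Var(mean) ~ S(0) / n
    var_mean = sigma2_eps / (n * (1.0 - phi.sum()) ** 2)
    return np.sqrt(var_mean)

# Hypothetical usage: cf_history would be a time series of the friction coefficient.
# delta_cf = ar_standard_error(cf_history)
```

In practice the AR order would typically be chosen by an information criterion rather than fixed, but that choice does not change the structure of the estimate.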
“…While such numerical and modeling errors have drawn the interest of many researchers, very little attention has been paid to other sources of uncertainty, such as insufficiently large computational domains, improper inflow conditions, a finite initial transient, or sampling errors interfering with the results of LES [7,9,28,46]. Furthermore, reference data reported in the literature for the same configuration can differ significantly from each other [63].…”
confidence: 87%
“…For further discussion, please refer to, e.g., [47]. In addition, it should be noted that, in the case of modest sample size, it is advantageous to estimate $\rho_i(s)$ by fitting an autoregressive time-series model to reduce spurious oscillations, as described in [46]. By means of the central limit theorem, the standard deviation of the estimator of the mean velocity $U_{i,\mathrm{mean}}$ can be determined as $\sigma^t_{U_{i,\mathrm{mean}}} = \sigma^t_{U_i} / \sqrt{N_t}$.…”
Section: Methods
confidence: 99%
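To connect the formula above with the remark about spurious oscillations, the sketch below shows the direct, non-parametric route: estimate $\rho_i(s)$ from the sample autocorrelation, convert it into an effective sample size, and apply $\sigma_{U_i}/\sqrt{N_{\mathrm{eff}}}$. The truncation lag and the function names are assumptions; at modest sample sizes this is exactly the estimator whose noise the autoregressive fit of [46] is intended to suppress.

```python
# Sketch: non-parametric effective-sample-size version of sigma / sqrt(N_t).
import numpy as np

def effective_sample_size(x, max_lag=None):
    """N_eff = N / (1 + 2 * sum of autocorrelations); truncation lag is a heuristic assumption."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 50
    xc = x - x.mean()
    c0 = xc @ xc / n
    rho = np.array([xc[: n - k] @ xc[k:] / (n * c0) for k in range(1, max_lag + 1)])
    return n / (1.0 + 2.0 * rho.sum())

def mean_standard_error(x):
    """sigma_U / sqrt(N_eff) in place of sigma_U / sqrt(N_t) for correlated samples."""
    return np.std(x, ddof=1) / np.sqrt(effective_sample_size(x))
```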