2011
DOI: 10.5194/gmd-4-435-2011
Automated continuous verification for numerical simulation

Abstract: Verification is a process crucially important for the final users of a computational model: code is useless if its results cannot be relied upon. Typically, verification is seen as a discrete event, performed once and for all after development is complete. However, this does not reflect the reality that many geoscientific codes undergo continuous development of the mathematical model, discretisation and software implementation. Therefore, we advocate that in such cases verification must be continuous…

Cited by 33 publications (31 citation statements)
References 37 publications
“…Note that the time step is adaptive and will therefore decrease along with the element size, such that this test checks both spatial and temporal convergence. This verifies that the model equations are implemented correctly (Farrell et al., 2011). A qualitative comparison shows that the solution matches the analytical solution very well (Fig.…”
Section: Forward Model Verification (supporting)
confidence: 53%
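The convergence check described in this statement can be sketched as follows. This is a minimal illustration, not code from the cited model: it estimates the observed order of accuracy from errors against an analytical solution at successively refined resolutions, which is the standard way such a verification test confirms that both spatial and temporal discretisations behave as expected.

```python
import numpy as np

def observed_order(h, errors):
    """Estimate the observed order of convergence p, assuming the
    errors behave like e_i ~ C * h_i**p, via a least-squares fit
    of log(e) against log(h)."""
    p, _ = np.polyfit(np.log(h), np.log(errors), 1)
    return p

# Illustrative data for a second-order scheme: halving h quarters the error.
h = np.array([0.1, 0.05, 0.025])
errors = np.array([1.0e-2, 2.5e-3, 6.25e-4])
print(round(observed_order(h, errors), 2))  # → 2.0
```

A verification suite would compare the fitted order against the theoretical order of the scheme and fail the test if the two disagree beyond a tolerance.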
“…Varying amounts of software testing are conducted throughout the cycle, but formal code verification practices (e.g., see D'Silva et al., 2008) are only recently starting to be considered for climate model development (Clune and Rood, 2011; Farrell et al., 2011). Nonetheless, the concentration on sound science, as opposed to software correctness, has led to climate models that contain fewer software defects than other comparably sized projects (Pipitone and Easterbrook, 2012).…”
Section: D. Lucas et al.: Failure Analysis of Climate Simulation C (mentioning)
confidence: 99%
“…Testing is often a secondary consideration to new feature implementation, so it is important that extension of the testing suite is as simple as possible. The suite can simply be run at the time of a new installation, following the upgrade of required libraries or the operating system, or routinely as part of a commit-hook buildbot with dedicated resources to continuously verify new code pushed to a Shingle development code repository (see, for example, Farrell et al., 2011). Being built on standard libraries, it could further form part of an automated wider system framework validation, for the above climate intercomparison projects, for example, reproducing the entire process from initialisation to post-processing, on demand.…”
Section: Continuous Verification (mentioning)
confidence: 99%
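The commit-hook style of continuous verification described in this statement might look, in minimal form, like the sketch below. The suite names and commands are hypothetical placeholders, not the actual Shingle or buildbot test suites: the point is only that each push triggers a set of verification commands whose exit codes decide pass or fail.

```python
import subprocess
import sys

def run_suite(tests):
    """Run each verification test command and record pass/fail,
    as a commit hook or buildbot worker might do on every push."""
    results = {}
    for name, cmd in tests.items():
        proc = subprocess.run(cmd, capture_output=True)
        results[name] = (proc.returncode == 0)
    return results

# Hypothetical suite: each entry would normally invoke a convergence
# or regression test script from the repository under test.
suite = {
    "spatial_temporal_convergence": [sys.executable, "-c", "assert 1 + 1 == 2"],
    "analytical_comparison": [sys.executable, "-c", "assert abs(0.1 + 0.2 - 0.3) < 1e-9"],
}
print(run_suite(suite))
```

In a real deployment the runner would report failures back to the developer (for example, by failing the push or posting to a dashboard) rather than just printing a dictionary.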