2021
DOI: 10.1017/jog.2021.120

Deep learning speeds up ice flow modelling by several orders of magnitude

Abstract: This paper introduces the Instructed Glacier Model (IGM) – a model that simulates ice dynamics, mass balance and its coupling to predict the evolution of glaciers, icefields or ice sheets. The novelty of IGM is that it models the ice flow by a Convolutional Neural Network, which is trained from data generated with hybrid SIA + SSA or Stokes ice flow models. By doing so, the most computationally demanding model component is substituted by a cheap emulator. Once trained with representative data, we demonstrate t…
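
To make the substitution concrete, here is a minimal sketch (Python/NumPy, with hypothetical names rather than the real IGM API) of a time step in which a trained emulator stands in for the expensive ice-flow solve inside the mass-conservation update:

```python
import numpy as np

def step(h, s, smb, emulator, dt, dx):
    """Advance ice thickness h by one time step dt (illustrative only).

    h, s, smb : 2-D arrays of ice thickness, surface elevation and surface
                mass balance on an NX x NY grid (assumed inputs).
    emulator  : callable returning vertically averaged velocities (ubar, vbar);
                it stands in for the trained CNN described in the abstract.
    """
    ubar, vbar = emulator(h, s)  # cheap CNN call instead of a Stokes/SIA+SSA solve
    # Mass conservation: dh/dt = smb - div(h * u), here with simple centred
    # differences; production models use upwind or implicit schemes instead.
    qx, qy = h * ubar, h * vbar
    div_q = np.gradient(qx, dx, axis=1) + np.gradient(qy, dx, axis=0)
    return np.maximum(h + dt * (smb - div_q), 0.0)  # keep thickness non-negative
```

Since the emulator call amounts to a handful of convolutions instead of a nonlinear system solve, the per-step cost of the ice-flow component drops sharply, which is the speed-up the title refers to.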

Cited by 30 publications (55 citation statements) | References 60 publications

Citation statements (ordered by relevance):

“…Following Jouvet and others (2021), our ice flow emulator predicts vertically averaged and surface horizontal velocities from ice thickness $h$, surface slope $\nabla s$ and ice flow strength parameter $\tilde{A}$:

$$\mathcal{F}\colon (h,\, \nabla s,\, \tilde{A}) \longmapsto (\bar{u},\, \bar{v},\, u_s,\, v_s),$$

where input and output are two-dimensional fields defined over the discretized computational domain (or subparts of it) of size $N_X \times N_Y$. The above emulator is the one introduced by Jouvet and others (2021), but augmented with the surface velocities in the output variable set, as necessary to define a misfit with observations.…”

Section: Methods (confidence: 99%)
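
A minimal Keras sketch of such a convolutional emulator, mapping stacked 2-D input fields (thickness, the two slope components and the flow-strength parameter) to stacked 2-D velocity fields; the channel counts, depth and width below are illustrative assumptions, not the actual architecture of Jouvet and others (2021):

```python
import tensorflow as tf

def build_emulator(n_in=4, n_out=4, width=32, depth=8):
    """Fully convolutional net: (NX, NY, n_in) fields -> (NX, NY, n_out) fields.

    Assumed channels: inputs (h, ds/dx, ds/dy, A-tilde), outputs
    (ubar, vbar, u_s, v_s). Being fully convolutional, the same weights
    apply to any grid size, so NX and NY are left unspecified.
    """
    inp = tf.keras.Input(shape=(None, None, n_in))
    x = inp
    for _ in range(depth):
        x = tf.keras.layers.Conv2D(width, 3, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2D(n_out, 3, padding="same")(x)  # linear velocity output
    return tf.keras.Model(inp, out)
```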
“…Here, we used the network architecture from Jouvet and others (2021), which was found to be optimal in terms of the trade-off between model fidelity and number of parameters. During training, we minimize the sole $L^1$ loss function using a stochastic gradient method – the Adam optimizer (Kingma and Ba, 2014) – with a learning rate of 0.0001, a batch size of 64 and 200 epochs (or iterations) to reach convergence.…”

Section: Methods (confidence: 99%)
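
A sketch of that training setup under the reported hyperparameters (L1 loss, Adam, learning rate 0.0001, batch size 64, 200 epochs), reusing the illustrative build_emulator from the sketch above; X and Y are placeholders for input/output fields generated with the reference ice-flow model, not real data:

```python
import tensorflow as tf

model = build_emulator()  # illustrative constructor from the previous sketch
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # reported learning rate
    loss=tf.keras.losses.MeanAbsoluteError(),  # mean absolute error, i.e. the L1 loss
)
model.fit(X, Y, batch_size=64, epochs=200)  # reported batch size and epoch count
```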