2022
DOI: 10.1002/env.2754

Emulation of greenhouse‐gas sensitivities using variational autoencoders

Abstract: Flux inversion is the process by which sources and sinks of a gas are identified from observations of gas mole fraction. The inversion often involves running a Lagrangian particle dispersion model (LPDM) to generate simulations of the gas movement over a domain of interest. The LPDM must be run backward in time for every gas measurement, and this can be computationally prohibitive. To address this problem, here we develop a novel spatio-temporal emulator for LPDM sensitivities that is built using a convolution…

Cited by 3 publications (3 citation statements)
References 42 publications
“…Decomposing the data to reduce the problem's dimensionality is a common method in the Earth sciences, particularly using empirical orthogonal functions (EOFs). However, Cartwright et al (2023) demonstrate that EOFs are not able to retain the structural information of footprints as well as a deep learning alternative, which in turn requires additional complexity, including longer training and prediction times and rotating the footprints to reduce spatial variability. A grid-cell-by-grid-cell approach is simpler to design, train and interpret, but it does not implicitly capture the spatial and temporal structure of the output.…”
Section: Formalization
confidence: 99%
“…A small number of methods have been developed to efficiently approximate LPDM footprints, mostly using interpolation or smoothing: Fasoli et al (2018) proposed a method that runs the LPDM with a small number of particles and uses kernel density estimation to infer the full footprint; Roten et al (2021) suggested a method to spatially interpolate footprints using nonlinear-weighted averaging of nearby plumes; and Cartwright et al (2023) developed an emulator that is capable of reconstructing LPDM footprints given a "known" set of nearby footprints, using a convolutional variational autoencoder for dimensionality reduction and a Gaussian process emulator for prediction. Though more computationally efficient than LPDMs alone, these methods still require running the LPDM a number of times for new predictions.…”
Section: Introduction
confidence: 99%
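The Cartwright et al (2023) pipeline described in the statement above reduces each footprint to a low-dimensional code, emulates that code with a Gaussian process over measurement locations, and decodes the predicted code back into a footprint. The toy sketch below illustrates only this three-step structure: plain PCA stands in for the convolutional variational autoencoder, and the one-dimensional "location" input, grid, and kernel lengthscale are invented for the example, not taken from the paper:

```python
import numpy as np

# Toy "footprints": one 16x16 sensitivity field per measurement location x.
xs = np.linspace(0.0, 1.0, 30)                   # measurement locations
grid = np.linspace(-2.0, 2.0, 16)
G = np.stack(np.meshgrid(grid, grid), axis=-1)   # (16, 16, 2) grid coords
footprints = np.array([
    np.exp(-np.sum((G - np.array([2 * x - 1, 0.0])) ** 2, axis=-1))
    for x in xs
]).reshape(len(xs), -1)                          # (30, 256)

# 1) Dimensionality reduction: project footprints onto k leading modes
#    (PCA via SVD here; the paper uses a convolutional VAE encoder).
mean = footprints.mean(axis=0)
U, S, Vt = np.linalg.svd(footprints - mean, full_matrices=False)
k = 3
Z = (footprints - mean) @ Vt[:k].T               # latent coords (30, k)

# 2) Gaussian-process regression of each latent coordinate against x.
def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_new = np.array([0.33])                         # unobserved location
K = rbf(xs, xs) + 1e-6 * np.eye(len(xs))         # jitter for stability
w = np.linalg.solve(K, Z)
z_new = rbf(x_new, xs) @ w                       # predicted latents (1, k)

# 3) Decode: reconstruct the emulated footprint from predicted latents.
fp_new = z_new @ Vt[:k] + mean                   # (1, 256)
```

The expensive LPDM runs are needed only to build the training set of footprints; a new location then costs one GP prediction and one decode, which is the source of the computational saving the statement describes.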
“…Most of the contributions to Part 2 of the special issue develop and apply such models: Yan, Cantoni, Field, Treble, and Mills Flemming (2023) consider a spatio‐temporal application in fisheries science that involves estimating the maturity of fish stock; Nie, Wang, and Cao (2023) apply functional data analysis to the problem of sub‐region estimation for daily bike‐share rentals; Laroche, Olteanu, and Rossi (2023) examine irregularly sampled left‐censored pesticide concentration data from France, developing new methodology for modeling spatio‐temporal heterogeneity; while Mukherjee, Bagozzi, and Chatterjee (2023) use spatio‐temporal fields to model climate and social instability interactions, as a framework for studying conflict. Several contributions also consider the problem of spatial/spatio‐temporal interpolation or emulation: Granville, Woolford, Dean, Boychuk, and McFayden (2023) tackle the problem of interpolating spatial data for generating a fire index for wildfires in Ontario, Canada, while Cartwright, Zammit‐Mangion, and Deutscher (2023) develop a spatio‐temporal emulator based on convolutional variational autoencoders. Several contributed opinion pieces also expand on the challenges in this area: Scott (2023) discusses the ‘digital earth’ concept and the challenges of spatially or temporally sparse data; Blair and Henrys (2023) consider the idea of ‘digital twins’ for making sense of complex, heterogeneous spatio‐temporal data; and Sain (2023) discusses data science and risk quantification in a complex environment.…”
Section: Application and Development of Spatio‐temporal Models
confidence: 99%