2019
DOI: 10.48550/arxiv.1905.03709
Preprint
Visualizing the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks

Abstract: We present a project that aims to generate images that depict accurate, vivid, and personalized outcomes of climate change using Cycle-Consistent Adversarial Networks (CycleGANs). By training our CycleGAN model on street-view images of houses before and after extreme weather events (e.g. floods, forest fires, etc.), we learn a mapping that can then be applied to images of locations that have not yet experienced these events. This visual transformation is paired with climate model predictions to assess likeliho…
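The mapping described in the abstract rests on CycleGAN's cycle-consistency idea: translating an image to the other domain and back should recover the original. Below is a minimal numerical sketch of the cycle-consistency loss, not the authors' implementation; the linear "generators" `G` and `F` are toy stand-ins for the convolutional networks a real CycleGAN would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two CycleGAN generators: G maps domain X
# (e.g. pre-flood street views) to Y (flooded), F maps Y back to X.
# Real generators are CNNs; here they are linear maps for illustration,
# and F is chosen as the exact inverse of G so the cycle loss is ~0.
A = rng.normal(size=(8, 8))
A_inv = np.linalg.inv(A)
G = lambda x: x @ A        # X -> Y
F = lambda y: y @ A_inv    # Y -> X

def cycle_consistency_loss(x, y):
    """L1 cycle loss: mean|F(G(x)) - x| + mean|G(F(y)) - y|."""
    return (np.abs(F(G(x)) - x).mean()
            + np.abs(G(F(y)) - y).mean())

x = rng.normal(size=(4, 8))  # batch of flattened "source" images
y = rng.normal(size=(4, 8))  # batch of flattened "target" images

loss = cycle_consistency_loss(x, y)
print(loss)  # near zero, since F inverts G exactly in this toy setup
```

In training, this term is added to the two adversarial losses and pulls the pair (G, F) toward mutually inverse mappings, which is what lets the model learn the before/after transformation without paired images.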

Cited by 13 publications (11 citation statements) · References 16 publications
“…While the nature of our task is specific to our project, we can nonetheless benchmark ClimateGAN against IIT models. In fact, in earlier iterations of our project, we leveraged the CycleGAN (Zhu et al., 2017) architecture in order to achieve initial results (Schmidt et al., 2019), before adopting a more structured approach. Therefore, to be as comprehensive in our benchmarking as possible, we trained the following five models on the same data as the ClimateGAN Painter and used the same test set for the comparison: CycleGAN, MUNIT (Huang et al., 2018), InstaGAN (Mo et al., 2018), InstaGAN using the mask to constrain the transformation to only the masked area (similarly to Eq.…”
Section: Comparables
confidence: 99%
“…In the first case, ML-infused tools to estimate the carbon footprint of individuals and households [Jones and Kammen, 2011] and to model individual behavior with regards to sustainable lifestyle choices and technologies [Carr-Cornish et al., 2011] can be very useful if they are sufficiently accurate and deployed on a large scale. Finally, minimizing psychological distance to the future effects of climate change is a promising way to reduce cognitive bias; in this regard, it is possible to use images generated using Generative Adversarial Networks (GANs) which represent the impacts of extreme events on locations that have personal value to the viewer [Schmidt et al., 2019]. A crucial part of developing ML tools for individuals is, once again, working with multidisciplinary experts in psychology, scientific communication, and user design to ensure that the tools created reach the largest possible audience and maximize their positive impact.…”
Section: Individuals and Societies
confidence: 99%
“…To evaluate the effect of LiSS on CycleGAN's performance, we compare it with a baseline CycleGAN from [43] and with the two aforementioned naive training schedules: sequential and parallel. We compare these 4 models on the horse↔zebra dataset and on a dataset of flooded↔non-flooded street-level scenes from [37] (the task is to simulate floods). As our goal is to understand how to efficiently leverage a set of given pretext tasks to improve representations, we keep T constant across experiments.…”
Section: Setup
confidence: 99%
“…In recent years, generative unsupervised image-to-image translation (I2IT) has gained tremendous popularity, enabling style transfer [43] and domain adaptation [10], raising awareness about wars [40] and Climate Change [37], and even helping model cloud reflectance fields [38]. I2IT has become a classical problem in computer vision which involves learning a conditional generative mapping from a source domain X to a target domain Y.…”
Section: Introduction: Motivation
confidence: 99%
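The mapping from a source domain X to a target domain Y mentioned in the statement above is, in the CycleGAN formulation the paper builds on, trained with a combined objective. As a sketch in standard notation (following Zhu et al., 2017; the symbols are not taken from the quoted text), the full loss pairs two adversarial terms with a cycle-consistency term:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F),$$

where $G: X \to Y$ and $F: Y \to X$ are the generators, $D_X, D_Y$ the discriminators, and the cycle term is $\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big]$, with $\lambda$ weighting reconstruction against realism.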