2022
DOI: 10.3847/1538-4357/ac6de4
GIGA-Lens: Fast Bayesian Inference for Strong Gravitational Lens Modeling

Abstract: We present GIGA-Lens: a gradient-informed, GPU-accelerated Bayesian framework for modeling strong gravitational lensing systems, implemented in TensorFlow and JAX. The three components (optimization using multistart gradient descent, posterior covariance estimation with variational inference, and sampling via Hamiltonian Monte Carlo) all take advantage of gradient information through automatic differentiation and massive parallelization on graphics processing units (GPUs). We test our pipeline on a large set o…
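As a rough illustration of the gradient-informed approach the abstract describes, below is a minimal JAX sketch (not the GIGA-Lens code itself) of two of the three stages: multistart gradient descent parallelized with jax.vmap, and a single Hamiltonian Monte Carlo transition, both driven by the same automatically differentiated negative log-posterior. The toy quadratic loss, the parameter dimension, the number of starts, and all step sizes are placeholder assumptions; the variational-inference stage between optimization and HMC is omitted.

```python
# Hypothetical sketch of a gradient-informed, GPU-parallel inference loop.
# The toy neg_log_posterior stands in for a real lens-model likelihood + prior.
import jax
import jax.numpy as jnp

def neg_log_posterior(params):
    # Placeholder: any differentiable image-plane chi^2 + prior would slot in here.
    return 0.5 * jnp.sum((params - jnp.array([1.0, -2.0, 0.5])) ** 2)

grad_nlp = jax.grad(neg_log_posterior)  # gradient via automatic differentiation

def gd_run(init, lr=1e-1, steps=200):
    # Plain gradient descent from one starting point.
    def body(p, _):
        return p - lr * grad_nlp(p), None
    final, _ = jax.lax.scan(body, init, None, length=steps)
    return final, neg_log_posterior(final)

# Multistart: vmap over many random initializations; on a GPU all starts
# run as one batched computation.
key = jax.random.PRNGKey(0)
inits = jax.random.normal(key, (512, 3))
finals, losses = jax.vmap(gd_run)(inits)
best = finals[jnp.argmin(losses)]

def hmc_step(key, q, step_size=0.05, n_leapfrog=20):
    # One leapfrog HMC transition targeting exp(-neg_log_posterior(q)).
    key_mom, key_acc = jax.random.split(key)
    p0 = jax.random.normal(key_mom, q.shape)
    q_new, p_new = q, p0 - 0.5 * step_size * grad_nlp(q)
    for _ in range(n_leapfrog - 1):
        q_new = q_new + step_size * p_new
        p_new = p_new - step_size * grad_nlp(q_new)
    q_new = q_new + step_size * p_new
    p_new = p_new - 0.5 * step_size * grad_nlp(q_new)
    # Metropolis accept/reject on the total Hamiltonian.
    h0 = neg_log_posterior(q) + 0.5 * jnp.sum(p0 ** 2)
    h1 = neg_log_posterior(q_new) + 0.5 * jnp.sum(p_new ** 2)
    accept = jax.random.uniform(key_acc) < jnp.exp(h0 - h1)
    return jnp.where(accept, q_new, q)

# Sample around the multistart optimum found above.
q = best
samples = []
for k in jax.random.split(jax.random.PRNGKey(1), 1000):
    q = hmc_step(k, q)
    samples.append(q)
```

In the actual pipeline the same pattern would be batched further (many chains and many lens systems at once), which is what makes the GPU parallelization pay off.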
Cited by 17 publications (12 citation statements)
References 96 publications
“…The latter is the best match to our networks, and is comparable in performance, as mentioned in S21a. There is also currently a lot of work going into automated modeling without machine learning (e.g., Nightingale et al. 2018, 2021; Rojas et al. 2022; Savary et al. 2022; Ertl et al. 2023; Etherington et al. 2022; Gu et al. 2022; Schmidt et al. 2023), which typically performs better than neural networks but requires significantly longer run times of hours to days. We refer to Schuldt et al. (2022) for a direct comparison between the network presented here and traditionally obtained models for real HSC lenses.…”
Section: Network Results and Performance
Mentioning confidence: 99%
“…Various studies are investigating this approach for ground-based images and also future Euclid-like and LSST-like images (e.g., Pearson et al. 2019; Schuldt et al. 2021; Pearson et al. 2021), and for its compatibility with the hierarchical inference framework introduced above (Wagner-Carena et al. 2021; Park et al. 2021). In addition, the use of Graphics Processing Units (GPUs) can save substantial computational time (Gu et al. 2022). The challenges are to validate these new modeling methods on real lenses, since these methods have so far been demonstrated only on mock lenses.…”
Section: Deflector-modeling Challenges
Mentioning confidence: 99%
“…One possibility is to automate the modeling procedure while still relying on Bayesian inference such as MCMC sampling (e.g., Nightingale et al. 2018; Rojas et al. 2022; Savary et al. 2022; Ertl et al. 2022; Etherington et al. 2022; Gu et al. 2022; Schmidt et al. 2023), which reduces the user input dramatically, resulting in an overall runtime on the order of days. A further speed-up can be achieved by using Graphics Processing Units (GPUs, e.g., Gu et al. 2022).…”
Section: Introduction
Mentioning confidence: 99%
“…One possibility is to automate the modeling procedure while still relying on Bayesian inference such as MCMC sampling (e.g., Nightingale et al. 2018; Rojas et al. 2022; Savary et al. 2022; Ertl et al. 2022; Etherington et al. 2022; Gu et al. 2022; Schmidt et al. 2023), which reduces the user input dramatically, resulting in an overall runtime on the order of days. A further speed-up can be achieved by using Graphics Processing Units (GPUs, e.g., Gu et al. 2022). Another option is to use machine learning (Hezaveh et al. 2017; Perreault Levasseur et al. 2017; Morningstar et al. 2018, 2019; Pearson et al. 2019, 2021; Schuldt et al. 2021b, 2023, hereafter S21b and S23, respectively).…”
Section: Introduction
Mentioning confidence: 99%