2022
DOI: 10.48550/arxiv.2207.01073
Preprint

Pixelated Reconstruction of Gravitational Lenses using Recurrent Inference Machines

Abstract: Modeling strong gravitational lenses in order to quantify the distortions in the images of background sources and to reconstruct the mass density in the foreground lenses has traditionally been a difficult computational challenge. As the quality of gravitational lens images increases, the task of fully exploiting the information they contain becomes computationally and algorithmically more difficult. In this work, we use a neural network based on the Recurrent Inference Machine (RIM) to simultaneously reconstr…
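
As a rough illustration of the RIM idea the abstract refers to, the sketch below implements a generic recurrent inference loop: a learned cell repeatedly consumes the current pixelated estimate and the gradient of the data log-likelihood under a known forward model, and outputs an additive refinement (in the spirit of Putzky & Welling 2017). This is a minimal sketch under assumed names (`RIMCell`, `forward_model`, `rim_reconstruct`), not the authors' implementation.

```python
# Minimal sketch of a Recurrent Inference Machine (RIM) update loop in PyTorch.
# `RIMCell`, `forward_model`, and `rim_reconstruct` are illustrative names,
# not the architecture or code used in the paper.
import torch
import torch.nn as nn

class RIMCell(nn.Module):
    """Maps (current estimate, likelihood gradient, hidden state) to an
    additive increment and an updated hidden state."""
    def __init__(self, n_maps=2, n_hidden=32):
        super().__init__()
        self.to_hidden = nn.Conv2d(2 * n_maps + n_hidden, n_hidden, 3, padding=1)
        self.to_delta = nn.Conv2d(n_hidden, n_maps, 3, padding=1)

    def forward(self, x, grad, h):
        h = torch.tanh(self.to_hidden(torch.cat([x, grad, h], dim=1)))
        return self.to_delta(h), h

def rim_reconstruct(y, forward_model, noise_sigma, cell, n_steps=10, n_maps=2, n_hidden=32):
    """Iteratively refine pixelated maps x (e.g. background source and lens
    mass density) by feeding the data-likelihood gradient back into the cell."""
    b, _, H, W = y.shape
    x = torch.zeros(b, n_maps, H, W, requires_grad=True)
    h = torch.zeros(b, n_hidden, H, W)
    for _ in range(n_steps):
        # Gaussian log-likelihood of the observation given the current estimate;
        # forward_model is assumed to be a differentiable (torch) lensing operator.
        log_like = -0.5 * (((forward_model(x) - y) / noise_sigma) ** 2).sum()
        grad = torch.autograd.grad(log_like, x, create_graph=True)[0]
        dx, h = cell(x, grad, h)
        x = x + dx
    return x
```

During training, one would compare the final (or intermediate) estimates against the ground-truth maps used to simulate the observations and backpropagate through the whole loop; that step is omitted here.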

Cited by 6 publications (6 citation statements) | References 43 publications

“…One possibility for circumventing this issue is to have a separate approximate inference of the nuisance parameters and draw the simulations for the density estimation stage from these posteriors. This may be possible in a scenario where a separate network is trained to infer a posterior distribution for the nuisance parameters (e.g., a generative model for the background source conditioned on the data; see Adam et al. 2022a, 2022b). As mentioned earlier, alternative implicit likelihood inference frameworks that allow implicit marginalization over nuisance parameters (e.g., likelihood ratio methods) are generally only practical in low-dimensional spaces.…”
Section: Discussion (mentioning)
confidence: 99%
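
The strategy described in the quoted passage can be summarized schematically: a separately trained network provides an approximate posterior over the nuisance parameters (here, the background source) conditioned on the observed data, and draws from it seed the simulations used to train the density estimator for the parameters of interest. The sketch below only illustrates that data flow; `nuisance_posterior`, `prior_theta`, and `simulate` are hypothetical stand-ins, not objects from the cited works.

```python
# Schematic of the two-stage idea in the quoted passage: draw nuisance
# parameters from a data-conditioned approximate posterior, then simulate
# training data for the density-estimation stage. All objects below
# (`nuisance_posterior`, `prior_theta`, `simulate`) are hypothetical.
import numpy as np

def make_training_set(y_obs, nuisance_posterior, prior_theta, simulate, n_sims=10_000):
    thetas, sims = [], []
    for _ in range(n_sims):
        eta = nuisance_posterior.sample(y_obs)   # nuisance draw conditioned on the data
        theta = prior_theta.sample()             # parameters of interest from the prior
        sims.append(simulate(theta, eta))        # forward simulation using both
        thetas.append(theta)
    return np.asarray(thetas), np.asarray(sims)

# A neural density estimator (e.g. a normalizing flow) trained on these
# (theta, simulation) pairs then approximates p(theta | data), implicitly
# marginalizing over the nuisance parameters eta.
```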
“…In contrast to that, we are able to model a sample of dozens of lenses with our automated traditional pipeline to better accuracy and we can also evaluate the quality of the fit in terms of a χ², which is not possible for the network output. The glee_tools.py code enables us to further refine the models obtained with our fully automated procedure or also other dedicated automated modeling codes (e.g., Hezaveh et al. 2017; Perreault Levasseur et al. 2017; Nightingale et al. 2018, 2021a; Pearson et al. 2019, 2021; Adam et al. 2022; Ertl et al. 2022; Etherington et al. 2022; Schmidt et al. 2023). The combination of all three codes enables us to handle different sample sizes of lenses, and thus takes us a huge step forward in handling the newly detected lenses in current and upcoming wide-field imaging surveys such as LSST and Euclid.…”
Section: Discussion (mentioning)
confidence: 99%
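
For concreteness, the χ² goodness-of-fit figure mentioned in the quoted passage is typically a noise-weighted sum of squared image-plane residuals divided by the number of degrees of freedom. The function below is a generic illustration of that quantity, not the glee_tools.py implementation.

```python
# Generic reduced chi-square of an image-plane lens-model fit; not the
# glee_tools.py implementation referenced in the quoted passage.
import numpy as np

def reduced_chi2(data, model, noise_map, n_free_params, mask=None):
    """Noise-weighted sum of squared residuals per degree of freedom."""
    if mask is None:
        mask = np.ones(data.shape, dtype=bool)
    residuals = (data[mask] - model[mask]) / noise_map[mask]
    dof = residuals.size - n_free_params
    return float(np.sum(residuals**2) / dof)
```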
“…More recently, Vernardos & Koopmans (2022) used the gravitational imaging technique to reconstruct the perturbing field of a population of subhalos and recovered its power spectrum properties, especially the slope, remarkably well. Several studies have also employed deep learning techniques to infer the presence of subhalo populations (e.g., Brehmer et al. 2019; Diaz Rivero & Dvorkin 2020; Varma et al. 2020; Coogan et al. 2020; Vernardos et al. 2020; Ostdiek et al. 2022; Adam et al. 2022). While it is still unclear if these methods are strongly limited by the simplifying assumptions on their training data (e.g.…”
Section: Introduction (mentioning)
confidence: 99%