2018
DOI: 10.1103/PhysRevD.97.103515

Non-Gaussian information from weak lensing data via deep learning

Abstract: Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a 2D convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm, σ8}. Using the area of the confidence contour in the {Ωm, σ8} plane as a figure-of-merit, derived from simulated convergence…
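As a rough illustration of the setup described in the abstract, the sketch below (assuming PyTorch; the layer sizes, input resolution, and architecture are placeholders, not the network used in the paper) shows a small 2D CNN that regresses the two parameters {Ωm, σ8} from a single-channel convergence map.

```python
# Minimal sketch (not the authors' architecture): a small 2D CNN that maps a
# single-channel convergence map to the two parameters (Omega_m, sigma_8).
# Layer counts, kernel sizes, and the 256x256 input size are illustrative.
import torch
import torch.nn as nn

class ConvergenceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.head = nn.Linear(64, 2)               # regress (Omega_m, sigma_8)

    def forward(self, kappa):                      # kappa: (batch, 1, H, W)
        return self.head(self.features(kappa).flatten(1))

maps = torch.randn(4, 1, 256, 256)                 # mock convergence maps
print(ConvergenceCNN()(maps).shape)                # torch.Size([4, 2])
```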

Cited by 115 publications (105 citation statements). References 54 publications.

Citation statements:
“…The convolutional neural network (CNN) extracts the characterising features directly from the pixel data of the training mass maps. We have experimented with a number of architectures, including classic topologies which implement a large number of 3 × 3 convolutions inspired by VGG-net (Simonyan & Zisserman 2014), as well as architectures presented in Ravanbakhsh et al. (2017) and Gupta et al. (2018). The model that worked best for our purposes is almost exclusively based on the Inception layers first presented in Szegedy et al. (2014).…”
Section: Convolutional Neural Network
Citation type: mentioning, confidence: 99%
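The Inception layers mentioned in this excerpt can be illustrated with a minimal block of parallel 1×1, 3×3, and 5×5 convolutions plus a pooled branch, concatenated along the channel axis. This is a generic GoogLeNet-style sketch in PyTorch, not the citing paper's exact module.

```python
# Illustrative Inception-style block (GoogLeNet flavour), not the citing
# paper's exact module: parallel 1x1, 3x3, 5x5 convolutions and a pooled
# branch, concatenated along the channel axis.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch=16):
        super().__init__()
        c = out_ch_per_branch
        self.b1 = nn.Conv2d(in_ch, c, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c, 1),
                                nn.Conv2d(c, c, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c, 1),
                                nn.Conv2d(c, c, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, c, 1))

    def forward(self, x):
        # every branch preserves the spatial size, so outputs can be concatenated
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(2, 32, 64, 64)
print(InceptionBlock(32)(x).shape)   # torch.Size([2, 64, 64, 64])
```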
“…One approach is to apply a standard 2D CNN to a grid discretisation of the sphere [20][21][22]. An alternative is to divide the sphere into small chunks and project those on flat 2D surfaces [9,11,12,23].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
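A toy version of the first approach (a standard 2D CNN applied to a grid discretisation of the sphere) is sketched below with NumPy, using a simple latitude-longitude grid; real pipelines usually start from HEALPix maps, so this is only illustrative.

```python
# Toy sketch of the first approach: discretise the full sky onto a regular
# latitude-longitude grid so a standard 2D CNN can consume it as an image.
# Real pipelines typically start from HEALPix maps; this only illustrates the idea.
import numpy as np

def latlon_grid(n_lat=256, n_lon=512):
    """Pixel-centre colatitudes theta (0..pi) and longitudes phi (0..2pi)."""
    theta = (np.arange(n_lat) + 0.5) * np.pi / n_lat
    phi = (np.arange(n_lon) + 0.5) * 2.0 * np.pi / n_lon
    return np.meshgrid(theta, phi, indexing="ij")

theta, phi = latlon_grid()
# Sample a toy spherical signal on the grid; a real map would be interpolated here.
kappa_grid = np.sin(3 * theta) * np.cos(2 * phi)
print(kappa_grid.shape)   # (256, 512) image, ready for a 2D CNN
```

The equirectangular grid distorts pixel areas towards the poles, which is one reason the alternative chunk-and-project strategy is also used; either way the result is a flat array that a standard 2D CNN can consume.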
“…Due to nonlinearities on small scales, the traditional analysis with two-point statistics does not fully capture all the underlying information [5]. Multiple inference methods were proposed to extract more details based on higher order statistics [6, 7], peak statistics [8][9][10][11][12][13], Minkowski functionals [14-16] and recently convolutional neural networks (CNN) [17,18]. Here we present an improved convolutional neural network that gives significantly better estimates of Ω m and σ 8 cosmological parameters from simulated convergence maps than the state-of-the-art methods and is also free of systematic bias.…”
Citation type: mentioning, confidence: 99%
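The peak statistics referred to here can be illustrated generically: count the local maxima of a smoothed convergence map above a set of signal-to-noise thresholds. The sketch below uses NumPy/SciPy on a mock map and is not any specific paper's estimator.

```python
# Generic illustration of peak statistics (not a specific paper's estimator):
# count local maxima of a smoothed convergence map above a set of S/N thresholds.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)
kappa = gaussian_filter(rng.normal(size=(256, 256)), sigma=2.0)  # mock noisy map
snr = kappa / kappa.std()

# a pixel is a peak if it equals the maximum of its 3x3 neighbourhood
is_peak = snr == maximum_filter(snr, size=3)
thresholds = np.arange(1.0, 5.0, 0.5)
peak_counts = [(is_peak & (snr >= t)).sum() for t in thresholds]
print(dict(zip(thresholds.round(1).tolist(), peak_counts)))
```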
“…The proposed scheme is even more accurate than the neural network on high-resolution noiseless maps. With shape noise and lower resolution its relative advantage deteriorates, but it remains more accurate than peak counting. Following the idea and using the simulation data from a recent study [18] we created an improved convolutional neural network (CNN) architecture (see details in the Methods) which is able to recover cosmological parameters more accurately from simulated weak lensing maps. The input of the network is a set of mock convergence (κ) maps generated by ray-tracing n-body simulations with 96 different values for the matter density Ω m and the scale of the initial perturbations normalized at the late Universe, σ 8 (see [18] and [19] for details of the weak lensing map generation); the outputs of the network were the predicted cosmological parameters.…”
Citation type: mentioning, confidence: 99%
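The training setup implied by this excerpt, a network mapping mock convergence maps to predicted (Ωm, σ8), amounts to supervised regression. The sketch below (PyTorch, with a placeholder architecture and mock data rather than the citing paper's improved CNN) shows such a loop with a mean-squared-error loss.

```python
# Illustrative supervised-regression loop (not the citing paper's training code):
# fit a small CNN to predict (Omega_m, sigma_8) from mock convergence maps
# with a mean-squared-error loss.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Mock training batch: 8 maps with their true (Omega_m, sigma_8) labels.
maps = torch.randn(8, 1, 128, 128)
params = torch.tensor([[0.3, 0.8]]).repeat(8, 1) + 0.05 * torch.randn(8, 2)

for step in range(5):                     # a real run would loop over many epochs
    optimiser.zero_grad()
    loss = loss_fn(model(maps), params)
    loss.backward()
    optimiser.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```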