2019
DOI: 10.1093/mnras/stz3056

Photometry of high-redshift blended galaxies using deep learning

Abstract: The new generation of deep photometric surveys requires unprecedentedly precise shape and photometry measurements of billions of galaxies to achieve their main science goals. At such depths, one major limiting factor is the blending of galaxies due to line-of-sight projection, with an expected fraction of blended galaxies of up to 50%. Current deblending approaches are in most cases either too slow or not accurate enough to reach the level of requirements. This work explores the use of deep neural networks to …
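The blending problem the abstract describes can be illustrated with a toy example (a sketch, not the paper's method): when two sources overlap along the line of sight, their pixels simply add, and naive aperture photometry of one source is contaminated by the other. The stamp size, fluxes, source separation, and aperture radius below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_stamp(size, x0, y0, flux, sigma):
    """Render a 2D Gaussian source with total flux `flux` on a size x size grid."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return flux * g / g.sum()

size = 64
central = gaussian_stamp(size, 32, 32, flux=1000.0, sigma=3.0)
companion = gaussian_stamp(size, 38, 32, flux=500.0, sigma=3.0)
blend = central + companion  # line-of-sight projection: pixel values simply add

# Naive aperture photometry on the blend: sum all pixels within r < 8 of the
# central source. The companion leaks flux into the aperture and biases the
# measurement high -- the kind of error deblenders are meant to remove.
y, x = np.mgrid[0:size, 0:size]
aperture = (x - 32) ** 2 + (y - 32) ** 2 < 8 ** 2
measured = blend[aperture].sum()
true_flux = central[aperture].sum()
contamination = (measured - true_flux) / true_flux
print(f"aperture flux: {measured:.1f}, true flux: {true_flux:.1f}, "
      f"blending bias: {100 * contamination:.0f}%")
```

With this geometry (6-pixel separation, sigma of 3 pixels) the companion contributes a few tens of per cent of spurious flux, which is why blending at the quoted up-to-50% rate matters for survey photometry.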

Cited by 46 publications (28 citation statements)
References 44 publications
“…The increased blending leads to new uncertainties in isolating sources, measuring their fluxes (required for photometric redshifts) and inferring their weak gravitational lensing shear estimates [29][30][31]. These effects in turn lead to subtle sample selection effects, which are amplified by the interaction of the point spread function (PSF) with blending [32][33][34]. But even with perfect measurements of fluxes, our ability to infer source redshift distributions is hampered by incompleteness in spectroscopic samples used to train and calibrate redshifts [35].…”
Section: Contents
“…Deblending, the process of separating overlapping or nested sources, is closely linked to source extraction; all of the tools we discuss in this paper make some attempt at deblending. However, for some scientific purposes, the tools do not produce sufficiently accurate separation of sources, leading to problems such as poor photometry (Abbott et al 2018; Huang et al 2018), and systematic errors in measurements of physical properties such as redshift (Boucaud et al 2020) and cluster mass (Simet & Mandelbaum 2015). Consequently, several tools also exist to perform deblending as a separate process.…”
Section: Deblending
“…Three interesting properties can be highlighted from the figure: (1) the capacity of the model to segment overlapping galaxies while retaining some uncertainty in the prediction on the overlapping region; (2) the accurate segmentation of multiple galaxies in the field, including truncated objects, with residuals concentrated towards the galaxy outskirts; and (3) the ability of the network to deal with empty stamps. That last case acts as a null test for the network and represents an improvement over the model of Boucaud et al (2020), which would fail when asked to predict only background noise.…”
Section: Images
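The "empty stamp" null test mentioned in the quote above can be sketched independently of any particular network: a detection map built from a pure-noise stamp should flag essentially nothing. The stand-in below uses a simple 5-sigma pixel threshold in place of a learned segmentation model; the stamp size, seed, and threshold are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
noise_stamp = rng.normal(0.0, 1.0, size=(64, 64))  # empty stamp: background noise only

# Toy stand-in for a segmentation output: flag pixels above 5 sigma.
# On unit-variance Gaussian noise the chance of any single pixel exceeding
# 5 sigma is ~3e-7, so a well-behaved deblender should likewise return an
# (almost) empty map on such input -- this is the null test the quote refers to.
seg_map = noise_stamp > 5.0
print("pixels flagged on the empty stamp:", int(seg_map.sum()))
```

A model that instead hallucinates structure on noise-only input would fail this check, which is the failure mode the citing authors report improving on.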
“…Accurately identifying blended galaxies is therefore a key task to guarantee the full potential of the forthcoming cosmological surveys. Several machine learning (ML) based (Boucaud et al, 2020; Arcelin et al, 2021) and non-ML based solutions (Melchior et al, 2018) have been proposed, although the issue remains open. In particular, a robust quantification of uncertainty, to be propagated into the final error budget, is generally lacking from current approaches.…”
Section: Introduction