2022
DOI: 10.3847/1538-4357/ac6d63

Mining for Strong Gravitational Lenses with Self-supervised Learning

Abstract: We employ self-supervised representation learning to distill information from 76 million galaxy images from the Dark Energy Spectroscopic Instrument Legacy Imaging Surveys' Data Release 9. Targeting the identification of new strong gravitational lens candidates, we first create a rapid similarity search tool to discover new strong lenses given only a single labeled example. We then show how training a simple linear classifier on the self-supervised representations, requiring only a few minutes on a CPU, can automatically…
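The similarity-search step the abstract describes reduces, at its core, to nearest-neighbour lookup in the learned representation space. A minimal sketch in Python, assuming precomputed embeddings (array names and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def similarity_search(query_embedding, embeddings, k=10):
    """Return indices of the k most similar galaxies by cosine similarity.

    `embeddings` is an (N, D) array of self-supervised representations;
    `query_embedding` is the (D,) vector of a single labeled lens example.
    """
    # Normalize so the dot product equals cosine similarity.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = emb @ q
    # Highest-scoring entries are the most visually/semantically similar.
    return np.argsort(-scores)[:k]
```

With embeddings precomputed once for the whole survey, each query is a single matrix-vector product, which is what makes search from one labeled example practical.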

Cited by 24 publications (13 citation statements). References 53 publications.
“…Hayat et al (2021) showed that, by using the latent space for morphological classification of galaxies, they can reach an accuracy similar to that of a fully supervised CNN while using >10 times less labelled data. In follow-up work, Stein et al (2021a) also explore how the self-supervised representations can be used to find strong gravitational lenses, reaching similar conclusions. As in the work by Cheng et al (2020), the representations are used with a small sample of lenses to train a simple linear classifier and find new strong lens candidates.…”
Section: Visualisation of Large Datasets (mentioning; confidence: 99%)
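The linear-classifier step these citing works describe amounts to fitting a standard logistic regression on frozen representations. A hedged sketch using scikit-learn, with random placeholder arrays standing in for real embeddings and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder inputs: frozen self-supervised representations and a small
# labeled set (1 = known lens, 0 = non-lens). Shapes are illustrative.
rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 128))
train_labels = rng.integers(0, 2, size=1000)

# A linear classifier on frozen representations trains in seconds on a CPU.
clf = LogisticRegression(max_iter=1000)
clf.fit(train_embeddings, train_labels)

# Score the full (unlabeled) survey and rank by lens probability.
survey_embeddings = rng.normal(size=(76_000, 128))
lens_scores = clf.predict_proba(survey_embeddings)[:, 1]
candidates = np.argsort(-lens_scores)[:100]  # top candidates for visual inspection
```

Because the expensive representation learning is done once, retraining the classifier as new labels arrive costs only minutes, which is the efficiency gain the citing works emphasize.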
“…An additional way to identify outliers is through the representation space learned by contrastive learning, as one would do with the latent space of an autoencoder. Stein et al (2021b) explored this approach on images from the DESI survey and demonstrated the efficiency of self-supervised representations for identifying outliers and performing similarity searches. See also the work by Walmsley et al (2021), which uses the tools developed by Lochner & Bassett (2021) for similarity search and anomaly detection on the representation spaces.…”
Section: Deep Learning for Discovery (mentioning; confidence: 99%)
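One simple way to realize the outlier identification described above is to score each object by its distance to its nearest neighbours in representation space: isolated points score high. A sketch under that assumption (not necessarily the exact method of Stein et al (2021b)):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def outlier_scores(embeddings, k=16):
    """Score each object by the mean distance to its k nearest neighbours
    in representation space; objects far from any cluster score high."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)
    # Column 0 is each point's zero distance to itself; drop it.
    return dists[:, 1:].mean(axis=1)
```

Ranking a survey by this score surfaces candidate anomalies for human inspection, the same workflow the similarity-search tools support.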
“…In [54] the same technique was used to construct representations for CWoLa-based anomaly detection. Beyond these works, other self-supervised/representation-learning techniques have been applied in particle physics [55,56] and in other scientific disciplines such as astrophysics [57][58][59][60]. In [53,54] the augmentations corresponded to transformations of the event to which the underlying physics should be invariant, such as rotations or translations, but also soft-collinear parton splittings.…”
Section: Introduction (mentioning; confidence: 99%)
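As an illustration of such physics-motivated augmentations (a sketch only, not the exact transformations used in [53,54]): a common random azimuthal rotation applied to all constituents of an event leaves the underlying physics unchanged, so the learned representation should be invariant to it.

```python
import numpy as np

def rotate_phi(constituents, rng):
    """Illustrative physics-motivated augmentation: rotate all event
    constituents by a common random azimuthal angle.

    `constituents` is an (N, 3) array with columns (pt, eta, phi);
    pt and eta are untouched because the rotation is purely azimuthal.
    """
    out = constituents.copy()
    angle = rng.uniform(0.0, 2.0 * np.pi)
    # Shift phi and wrap back into (-pi, pi].
    out[:, 2] = np.mod(out[:, 2] + angle + np.pi, 2.0 * np.pi) - np.pi
    return out

rng = np.random.default_rng(0)
event = rng.normal(size=(30, 3))          # placeholder constituents
augmented = rotate_phi(event, rng)        # a positive pair for contrastive training
```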
“…The key principle is to extract underlying patterns in data by maximizing the similarity of augmentations of the same instance while minimizing the similarity between different instances 31 . Recently, contrastive learning has attracted increasing attention in the natural sciences and has shown remarkable results on a variety of scientific problems, including molecular representation 40,41 , the density of states of 2D photonic crystals 42 , similarity search for sky surveys 43 , single-particle diffraction images 44 , and Raman spectra. In particular, Ref.…”
Section: Introduction (mentioning; confidence: 99%)
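That principle is usually implemented with an InfoNCE-style loss: representations of two augmentations of the same instance are pulled together, and all other pairings in the batch are pushed apart. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (sketch).

    z1[i] and z2[i] are (N, D) representations of two augmentations of the
    same instance i; every other pair (i, j != i) serves as a negative.
    """
    # Normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matched pairs sit on the diagonal; maximize their log-probability.
    return -np.mean(np.diag(log_probs))
```

The temperature controls how sharply the loss penalizes hard negatives; small values concentrate gradient on the most similar non-matching pairs.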