2022 · Preprint
DOI: 10.48550/arXiv.2209.07542
Toward an understanding of the properties of neural network approaches for supernovae light curve approximation

Abstract: Modern time-domain photometric surveys collect a large number of observations of various astronomical objects, and the coming era of large-scale surveys will provide even more information. Most of these objects have never received spectroscopic follow-up, which is especially crucial for transients, e.g. supernovae. In such cases, observed light curves could present an affordable alternative. Time series are actively used for photometric classification and characterization, such as peak and luminosity decline estimation. …
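The abstract describes approximating irregularly sampled supernova light curves with neural networks. As a rough illustration of that idea, the sketch below fits a small per-object MLP mapping time to flux for a single light curve; the architecture, normalization, and training settings here are assumptions made for the sketch, not the authors' actual setup.

```python
# A minimal sketch (PyTorch) of per-object light-curve approximation
# with a small MLP. Everything here (layer sizes, Tanh activations,
# MSE loss, Adam settings) is illustrative, not the paper's pipeline.
import torch
import torch.nn as nn

def fit_light_curve(t, flux, epochs=2000, lr=1e-2):
    """Fit flux(t) for one supernova from irregularly sampled photometry."""
    t = torch.as_tensor(t, dtype=torch.float32).reshape(-1, 1)
    flux = torch.as_tensor(flux, dtype=torch.float32).reshape(-1, 1)
    # Normalize the time axis so training is stable regardless of MJD scale.
    t_norm = (t - t.mean()) / t.std()
    model = nn.Sequential(
        nn.Linear(1, 32), nn.Tanh(),
        nn.Linear(32, 32), nn.Tanh(),
        nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(t_norm), flux)
        loss.backward()
        opt.step()
    # Evaluate the fitted model on a dense time grid to obtain a smooth
    # approximation, e.g. for peak or decline-rate estimation.
    return model
```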

Cited by 1 publication (1 citation statement) · References 51 publications
“…Although the overwhelming majority of photometric classification efforts invoke machine-learning algorithms and deep-learning architectures, there is a variety of approaches for common tasks. When extracting light-curve features, some fit an empirical functional form (Bazin et al. 2009; Karpenka et al. 2012; Villar et al. 2019; Sánchez-Sáez et al. 2021); some estimate a smooth approximation to the light curve using Gaussian process interpolation (Lochner et al. 2016; Boone 2019; Alves et al. 2022), multilayer perceptrons (MLPs; Demianenko et al. 2022), or normalizing flows (Demianenko et al. 2022); some apply neural networks such as temporal convolutional networks (Muthukrishna et al. 2019), recurrent neural networks (Charnock & Moss 2017; Möller et al. 2021; Gagliano et al. 2022a), convolutional neural networks (Pasquet et al. 2019b; Qu et al. 2021; Burhanudin & Maund 2023), Bayesian neural networks (Demianenko et al. 2022), or variational autoencoders (VAEs; Boone 2021); or, finally, some use a mix of the above. In this work, we use VAEs (Kingma & Welling 2013) from the literature; a VAE model approximates the input's posterior distribution over the (low-dimensional) latent space using variational inference (the "encoder"), from which a generative model (the "decoder") reconstructs the input given a position in latent space.…”
Section: Introduction
Mentioning confidence: 99%
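The quoted passage describes the VAE's encoder/decoder split: the encoder infers an approximate posterior over a low-dimensional latent space, and the decoder reconstructs the input from a latent position. Below is a minimal PyTorch sketch of that structure, offered only as an illustration; the dimensions, architecture, and Gaussian/MSE loss choices are assumptions for the sketch, not the implementation used in the cited works.

```python
# Minimal VAE sketch (PyTorch), following Kingma & Welling (2013).
# All sizes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=128, latent_dim=8, hidden=64):
        super().__init__()
        # Encoder: maps the input to the parameters (mean, log-variance)
        # of an approximate Gaussian posterior over the latent space.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder: generative model that reconstructs the input
        # given a position in latent space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Negative ELBO: reconstruction error plus the KL divergence of the
    # approximate posterior N(mu, sigma^2) from the prior N(0, I).
    recon_err = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```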