2023
DOI: 10.3847/1538-4357/aca8fe

Calibrating Cosmological Simulations with Implicit Likelihood Inference Using Galaxy Growth Observables

Abstract: In a novel approach employing implicit likelihood inference (ILI), also known as likelihood-free inference, we calibrate the parameters of cosmological hydrodynamic simulations against observations, a task that was previously infeasible due to the high computational cost of these simulations. For computational efficiency, we train neural networks as emulators on ∼1000 cosmological simulations from the CAMELS project to estimate simulated observables, taking as input the cosmological and astrophysical parameters…
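The emulation step in the abstract is compact enough to sketch. Below is a minimal illustration, not the paper's actual code: a small PyTorch MLP standing in for the emulators trained on the ∼1000 CAMELS simulations, mapping the cosmological and astrophysical parameters to a binned simulated observable such as the stellar mass function. The architecture, bin count, and training data here are placeholders.

```python
# Minimal sketch of the emulator idea: a small MLP that maps the six
# CAMELS parameters (Omega_m, sigma_8, A_SN1, A_SN2, A_AGN1, A_AGN2)
# to a binned observable (e.g., a stellar mass function). The layer
# widths and bin count are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

N_PARAMS = 6   # cosmological + astrophysical inputs
N_BINS = 10    # e.g., log-spaced stellar-mass bins of the SMF

class Emulator(nn.Module):
    def __init__(self, n_params=N_PARAMS, n_bins=N_BINS, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_bins),  # predicted observable per bin
        )

    def forward(self, theta):
        return self.net(theta)

# Training-loop sketch: (theta, observable) pairs would come from the
# CAMELS runs; random tensors stand in for them here.
emulator = Emulator()
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
theta = torch.rand(64, N_PARAMS)      # placeholder parameter draws
observable = torch.rand(64, N_BINS)   # placeholder measured observables

for step in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(emulator(theta), observable)
    loss.backward()
    optimizer.step()
```

Once trained, such an emulator replaces the expensive simulator inside the inference loop, which is what makes the calibration tractable.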

Cited by 9 publications (5 citation statements) · References 121 publications (174 reference statements)

Citation statements, ordered by relevance:

Cosmology with Multiple Galaxies
Chawak, Villaescusa-Navarro, Echeverri-Rojas et al. 2024, ApJ (self-citation)

“…We emphasize that this does not mean that the model uses information from individual galaxies and somehow stacks the results. Even using this subset of variables, one can construct noisy estimates of properties, like the stellar mass function, that are expected to be affected by cosmology (Jo et al. 2023). Therefore, the source of information may arise from both individual galaxies and collective properties.…”
Section: Villaescusa · Citation type: mentioning · Confidence: 99%
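As an aside on what a "noisy estimate" of such a property looks like in practice: with only a handful of galaxies in a simulation box, the stellar mass function is just a sparsely populated histogram of stellar masses per unit volume. The masses, box size, and binning below are illustrative, not taken from either paper.

```python
# Sketch: a noisy stellar mass function Phi(M) from a few galaxies.
# With N galaxies in a volume V, Phi is a histogram of log stellar
# masses divided by V and the bin width; small N makes it noisy.
import numpy as np

log_mstar = np.array([9.2, 9.8, 10.1, 10.4, 10.6])  # log10(M*/Msun), illustrative
volume = 25.0**3                                     # (Mpc/h)^3, e.g., a CAMELS-sized box
bins = np.arange(9.0, 11.5, 0.5)                     # log-mass bin edges

counts, edges = np.histogram(log_mstar, bins=bins)
phi = counts / (volume * np.diff(edges))             # number density per dex
print(phi)  # noisy with 5 galaxies, but cosmology-sensitive on average
```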
“…However, by using multiple galaxies, it should also be possible to infer the value of cosmological and astrophysical parameters by characterizing their impact on galaxy statistics like the stellar mass function. Recently, Busillo et al. (2023) have shown that galaxy scaling relations are sensitive to both cosmology and astrophysics and derived constraints on those from real data (see also Jo et al. 2023 for the impact on the star formation rate history and the stellar mass function). In this work, we thus ask ourselves how well we can infer cosmological parameters if we only have a few galaxies.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%

“…However, full hydrodynamic simulations are computationally expensive to run, often limited to either small volumes or low resolution. This makes it difficult to thoroughly explore the parameter space of the subgrid recipes (though even CAMELS is helping to address this in Jo et al. 2023).…”
Section: The Santa Cruz Semi-analytic Model In Context · Citation type: mentioning · Confidence: 99%

“…For a neural network to learn a posterior, it requires a loss function to measure its performance (i.e., to calculate the gradients it uses to update the weights between neurons in order to eventually converge on values closest to the true ones). We perform both parameter regression with a standard mean-squared-error (MSE) validation criterion and likelihood-free inference (LFI) with the method from Jeffrey & Wandelt (2020), updated for CAMELS in Villaescusa-Navarro et al. (2022a) and featured in Jo et al. (2023). Our parameter regression is a fast and straightforward way to measure the mean of the posterior and therefore roughly approximate the network's accuracy, while the latter trains for longer in order to also measure the posterior's standard deviation.…”
Section: Loss Functions · Citation type: mentioning · Confidence: 99%
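The loss the quoted passage refers to is the moment-network objective of Jeffrey & Wandelt (2020): one term pulls the network output toward the posterior mean, and a second term trains a predicted standard deviation to match the scatter of the residuals. A minimal sketch under that reading follows; tensor shapes and names are illustrative, and the published CAMELS implementations may differ in detail, e.g., in how the two terms are combined.

```python
# Sketch of the two-moment loss described above (Jeffrey & Wandelt 2020):
# the network predicts a posterior mean mu and standard deviation sigma
# per target parameter; the second term trains sigma^2 to match the
# squared residual of the first.
import torch

def moment_network_loss(mu, sigma, theta):
    """mu, sigma, theta: tensors of shape (batch, n_params)."""
    residual_sq = (theta - mu) ** 2
    loss_mean = residual_sq.mean()                       # learns the posterior mean
    loss_std = ((residual_sq - sigma**2) ** 2).mean()    # learns the posterior std
    return loss_mean + loss_std

# Example with random stand-ins for a batch of 8 samples, 6 parameters.
mu, sigma, theta = torch.randn(8, 6), torch.rand(8, 6), torch.randn(8, 6)
print(moment_network_loss(mu, sigma, theta))
```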