We present a new power spectrum emulator named EuclidEmulator that estimates the nonlinear correction to the linear dark matter power spectrum as a function of the six cosmological parameters ω_b, ω_m, n_s, h, w_0, and σ_8. It is constructed with the uncertainty quantification software UQLab using a spectral decomposition method called polynomial chaos expansion. All steps in its construction have been tested and optimized: the large high-resolution N-body simulations carried out with PKDGRAV3 were validated using a simulation from the Euclid Flagship campaign and demonstrated to have converged up to wavenumbers k ≈ 5 h Mpc^−1 for redshifts z ≤ 5. The emulator is based on 100 input cosmologies simulated in boxes of (1250 Mpc/h)^3 using 2048^3 particles. We show that by creating mock emulators it is possible to predict and optimize the performance of the final emulator prior to performing any N-body simulations. The absolute accuracy of the final nonlinear power spectrum is as good as that obtained with N-body simulations: conservatively, ∼1 per cent for k ≲ 1 h Mpc^−1 and z ≲ 1. This enables efficient forward modelling in the nonlinear regime, allowing for estimation of cosmological parameters using Markov chain Monte Carlo methods. EuclidEmulator has been compared to HALOFIT, CosmicEmu, and NGenHalofit, and shown to be more accurate than these approaches. This work paves the way for optimal construction of future emulators that also consider other cosmological observables, use higher-resolution input simulations, and investigate higher-dimensional cosmological parameter spaces.
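To make the emulation technique concrete, here is a minimal one-dimensional sketch of a non-intrusive polynomial chaos expansion: a target function standing in for an expensive simulation output is expanded in Legendre polynomials (orthogonal under a uniform prior on the input) with coefficients obtained by least-squares regression on a small training set. This is an illustrative toy, not the UQLab implementation; the target function and truncation order are arbitrary choices.

```python
import numpy as np

# Toy 1-D polynomial chaos expansion (PCE) sketch.
# The "simulation" f(x) on [-1, 1] is expanded in Legendre polynomials,
# which are orthogonal under the uniform prior assumed for the input.
# Coefficients are found by regression on a small training set,
# mimicking how an emulator is trained on a handful of simulations.

rng = np.random.default_rng(0)

def target(x):
    # Hypothetical stand-in for an expensive simulation output.
    return np.exp(-x**2) * np.cos(3 * x)

order = 8                                   # truncation order of the expansion
x_train = rng.uniform(-1, 1, 40)            # 40 "simulated" design points
y_train = target(x_train)

# Design matrix of Legendre polynomials P_0..P_order at the training points.
A = np.polynomial.legendre.legvander(x_train, order)
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def emulate(x):
    # Evaluating the PCE is cheap: a single polynomial evaluation.
    return np.polynomial.legendre.legval(x, coeffs)

x_test = np.linspace(-1, 1, 200)
err = np.max(np.abs(emulate(x_test) - target(x_test)))
```

Once trained, the emulator replaces the expensive function entirely; for this smooth toy target, an order-8 expansion already reaches per-cent-level worst-case accuracy.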
Baryonic feedback effects lead to a suppression of the weak-lensing angular power spectrum on small scales. The poorly constrained shape and amplitude of this suppression is an important source of uncertainty for upcoming cosmological weak-lensing surveys such as Euclid or LSST. In this first paper in a series of two, we use simulations to build a Euclid-like tomographic mock data set for the cosmic shear power spectrum and the corresponding covariance matrix, both corrected for baryons following the baryonification method of Schneider et al. [1]. In addition, we develop an emulator to obtain fast predictions of the baryonic suppression effects, allowing us to perform a likelihood inference analysis for a standard ΛCDM cosmology with both cosmological and astrophysical parameters. Our main findings are the following: (i) ignoring baryonic effects leads to a greater than 5σ bias on the cosmological parameters Ω_m and σ_8; (ii) restricting the analysis to the largest scales, which are mostly unaffected by baryons, makes the bias disappear but blows up the Ω_m–σ_8 contour area by more than a factor of 10; (iii) ignoring baryonic effects on
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w_0 w_a CDM + ∑m_ν models between redshift z = 0 and z = 3 for spatial scales within the range 0.01 h Mpc^−1 ≤ k ≤ 10 h Mpc^−1. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000^3 particles in boxes of (1 h^−1 Gpc)^3 volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter w_a significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of 1 per cent or better for 0.01 h Mpc^−1 ≤ k ≤ 10 h Mpc^−1 and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
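The "paired-and-fixed" initial conditions mentioned in point (iii) can be sketched in one dimension: mode amplitudes are fixed to the square root of the target power spectrum (removing cosmic variance in the realization), and a second realization is generated with all phases shifted by π, so that leading-order realization noise cancels when results from the pair are averaged. The power spectrum below is a hypothetical placeholder.

```python
import numpy as np

# Toy 1-D sketch of "paired-and-fixed" Gaussian initial conditions
# (in the spirit of Angulo & Pontzen 2016). Amplitudes are fixed,
# only phases are random; the paired field has phases shifted by pi.

rng = np.random.default_rng(1)
n_modes = 64
k = np.arange(1, n_modes + 1)
P = k**-2.0                      # hypothetical target power spectrum

phases = rng.uniform(0, 2 * np.pi, n_modes)

# "Fixed": mode amplitude is deterministic, sqrt(P(k)); only the phase varies.
delta_fixed = np.sqrt(P) * np.exp(1j * phases)
# "Paired": same amplitudes, every phase shifted by pi (i.e. the field is negated).
delta_paired = np.sqrt(P) * np.exp(1j * (phases + np.pi))

# The measured power of each fixed realization matches P(k) exactly,
# with no cosmic-variance scatter.
P_measured = np.abs(delta_fixed)**2
```

Averaging summary statistics over the pair suppresses the leading stochastic contribution, which is why a modest number of such simulations can reach per-cent-level accuracy.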
An accurate modelling of baryonic feedback effects is required to exploit the full potential of future weak-lensing surveys such as Euclid or LSST. In this second paper in a series of two, we combine Euclid-like mock data of the cosmic shear power spectrum with an eROSITA X-ray mock of the cluster gas fraction to run a combined likelihood analysis including both cosmological and baryonic parameters. Following the first paper of this series, the baryonic effects (based on the baryonic correction model of Ref. [1]) are included in both the tomographic power spectrum and the covariance matrix. However, this time we assume the more realistic case of a ΛCDM cosmology with massive neutrinos, and we consider several extensions of the currently favoured cosmological model. For the standard ΛCDM case, we show that including X-ray data reduces the uncertainty on the sum of the neutrino masses by ∼30 per cent, while there is only a mild improvement on other parameters such as Ω_m and σ_8. As extensions of ΛCDM, we consider the cases of a dynamical dark energy model (wCDM), an f(R) gravity model (fRCDM), and a mixed dark matter model (ΛMDM) with both a cold and a warm/hot dark matter component. We find that combining weak-lensing with X-ray data only leads to a mild improvement of the constraints on the additional parameters of wCDM, while the improvement is more substantial for both fRCDM and ΛMDM. Ignoring baryonic effects in the analysis pipeline leads to significant false detections of either phantom dark energy or a light subdominant dark matter component. Overall, we conclude that for all cosmologies considered, a general parametrisation of baryonic effects is both necessary and sufficient to obtain tight constraints on cosmological parameters.
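For independent probes, combining the weak-lensing and X-ray data amounts to summing their χ² terms in a joint Gaussian likelihood. The following sketch shows that structure with purely hypothetical placeholder numbers; it is not the paper's pipeline.

```python
import numpy as np

# Minimal sketch of a combined Gaussian likelihood for two independent
# probes: a (mock) weak-lensing data vector with its covariance, plus an
# X-ray gas-fraction measurement. All numbers are hypothetical.

def chi2(data, model, cov):
    r = data - model
    return float(r @ np.linalg.solve(cov, r))

def log_likelihood(model_wl, model_xray, data_wl, cov_wl, data_xray, cov_xray):
    # Independent probes: the chi-squared terms simply add.
    return -0.5 * (chi2(data_wl, model_wl, cov_wl)
                   + chi2(data_xray, model_xray, cov_xray))

data_wl = np.array([1.0, 2.0, 3.0])     # placeholder shear band powers
cov_wl = np.diag([0.1, 0.1, 0.1])
data_xray = np.array([0.12])            # placeholder gas fraction
cov_xray = np.diag([0.01])

# A model that matches the data exactly maximises the log-likelihood at 0.
lnL_best = log_likelihood(data_wl, data_xray, data_wl, cov_wl, data_xray, cov_xray)
```

The gain from adding X-ray data comes precisely from this second term: it tightens directions in parameter space (here, the baryonic and neutrino parameters) that the shear term alone constrains weakly.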
We present N-body simulations which are fully compatible with general relativity, with dark energy consistently included at both the background and perturbation level. We test our approach for dark energy parameterised both as a fluid and using the parameterised post-Friedmann (PPF) formalism. In most cases, dark energy is very smooth relative to dark matter, so that its leading effect on structure formation is the change to the background expansion rate. This can easily be incorporated into Newtonian N-body simulations by changing the Friedmann equation. However, dark energy perturbations and relativistic corrections can lead to differences relative to Newtonian N-body simulations at the tens-of-per-cent level on scales k < 10^−3–10^−2 Mpc^−1, and given the accuracy of upcoming large-scale structure surveys, such effects must be included. In this paper we study both effects in detail and highlight the conditions under which they are important. We also show that our N-body simulations exactly reproduce the results of the Boltzmann solver CLASS on all scales which remain linear. (arXiv:1904.05210v1 [astro-ph.CO])
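The "change to the Friedmann equation" for smooth dark energy can be written down directly. For a flat CPL parameterisation w(a) = w_0 + w_a(1 − a) (an illustrative choice, with placeholder parameter values), the dimensionless expansion rate is E(a)² = Ω_m a⁻³ + (1 − Ω_m) a^(−3(1+w_0+w_a)) exp(−3 w_a (1 − a)):

```python
import numpy as np

# Sketch of how a Newtonian N-body code absorbs smooth dark energy:
# only the background expansion rate H(a) changes. For a flat CPL
# model w(a) = w0 + wa*(1 - a), the Friedmann equation gives
#   E(a)^2 = H(a)^2 / H0^2
#          = Om*a^-3 + (1 - Om)*a^(-3*(1 + w0 + wa)) * exp(-3*wa*(1 - a)).

def E_of_a(a, Om=0.31, w0=-1.0, wa=0.0):
    a = np.asarray(a, dtype=float)
    Ode = 1.0 - Om
    return np.sqrt(Om * a**-3
                   + Ode * a**(-3.0 * (1.0 + w0 + wa))
                   * np.exp(-3.0 * wa * (1.0 - a)))

# For w0 = -1, wa = 0 this reduces to flat LambdaCDM, with E(1) = 1 today.
```

An N-body code only needs this E(a) in its drift and kick factors; the dark energy perturbations discussed above are precisely what this background-only treatment misses on near-horizon scales.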
In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.
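Of the four data-driven techniques named above, Gaussian-process regression is perhaps the simplest to write out. The following numpy-only sketch regresses a toy 1-D stand-in for a post-impact quantity; the kernel, hyperparameters, and target function are illustrative, not those used in the study.

```python
import numpy as np

# Minimal Gaussian-process regression with a squared-exponential (RBF)
# kernel, as a stand-in for emulating a post-impact quantity from a
# table of SPH simulation outcomes. All choices here are illustrative.

def rbf(x1, x2, length=0.3, amp=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return amp * np.exp(-0.5 * (d / length)**2)

x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train)       # hypothetical outcome quantity

noise = 1e-8                                # jitter for numerical stability
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)         # precompute K^-1 y once

def predict(x_new):
    # Posterior mean: k(x_new, X) @ K^-1 @ y.
    return rbf(np.atleast_1d(x_new), x_train) @ alpha

pred = float(predict(0.37)[0])
```

The same train-once, predict-cheaply pattern is what makes such emulators viable replacements for direct SPH calls inside N-body integrations.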
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While convergence maps carry the same information, the lensing signal is more compressed in them than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods included in the official Euclid data-processing pipeline: the standard Kaiser & Squires (KS) method and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. The new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method.
In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
We implement EuclidEmulator (version 1), an emulator for the non-linear correction of the matter power spectrum, into the Markov chain Monte Carlo forecasting code MontePython. We compare the performance of HALOFIT, HMCode, and EuclidEmulator1, both at the level of power spectrum prediction and at the level of posterior probability distributions of the cosmological parameters, for different cosmological models and different galaxy power spectrum wavenumber cut-offs. We confirm that the choice of the power spectrum predictor has a non-negligible effect on the computed sensitivities when doing cosmological parameter forecasting, even for a conservative wavenumber cut-off of 0.2 h Mpc^−1. We find that EuclidEmulator1 is on average up to 17 per cent more sensitive to the cosmological parameters than the other two codes, with the most significant improvements being up to 42 per cent for the Hubble parameter and up to 26 per cent for the equation of state of dark energy, depending on the case. In addition, we point out that the choice of the power spectrum predictor contributes to the risk of computing a significantly biased mean cosmology when doing parameter estimations. For the four tested scenarios we find biases, averaged over the cosmological parameters, of between 0.5 and 2σ (from below 1σ up to 6σ for individual parameters). This paper provides a proof of concept that this risk can be mitigated by taking a well-tailored theoretical uncertainty into account, as this makes it possible to reduce the bias by a factor of 2 to 5, depending on the case under consideration, while keeping posterior credibility contours small: the standard deviations are amplified by a factor of ≤1.4 in all cases.
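The sampling machinery behind such forecasts is standard Metropolis-Hastings: at each step the power spectrum predictor is evaluated inside the log-posterior. The schematic below replaces that expensive call with a trivial 1-D Gaussian posterior (all numbers are illustrative placeholders, not MontePython internals).

```python
import numpy as np

# Schematic Metropolis-Hastings sampler of the kind MontePython runs,
# with the power-spectrum predictor replaced by a toy 1-D Gaussian
# log-posterior (mean 0.3, sigma 0.02; purely illustrative values).

def log_post(theta):
    # In a real run, this is where the emulator/HALOFIT/HMCode call sits.
    return -0.5 * ((theta - 0.3) / 0.02)**2

rng = np.random.default_rng(4)
theta, lp = 0.25, log_post(0.25)
chain = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal()   # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[2000:])                     # discard burn-in
mean, std = chain.mean(), chain.std()
```

Since the posterior is evaluated tens of thousands of times per chain, swapping a slow predictor for a fast emulator directly multiplies forecasting throughput.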