We use a set of seminumerical simulations based on the Zel'dovich approximation, the friends‐of‐friends algorithm and the excursion set formalism to generate reionization maps of high dynamic range with a range of assumptions regarding the distribution and luminosity of ionizing sources and the spatial distribution of sinks of ionizing radiation. We find that ignoring the inhomogeneous spatial distribution of regions of high gas density where recombinations are important – as is often done in studies of this kind – can lead to misleading conclusions regarding the topology of reionization, especially if reionization occurs in the photon‐starved regime suggested by Lyα forest data. The inhomogeneous spatial distribution of recombinations significantly reduces the mean free path of ionizing photons and the typical size of coherently ionized regions. Reionization then proceeds much more as an outside‐in process: low‐density regions far from ionizing sources become ionized before regions of high gas density that do not host sources of ionizing radiation. The spatial distribution of sinks of ionizing radiation also significantly affects the shape and amplitude of the power spectrum of 21 cm emission fluctuations. The slope of the 21 cm power spectrum as measured by upcoming 21 cm experiments should be able to distinguish to what extent reionization proceeds outside‐in or inside‐out, while the evolution of the amplitude of the power spectrum with increasing ionized mass fraction should be sensitive to the spatial distribution and luminosity of ionizing sources.
Cosmological observations suggest the existence of two different kinds of energy densities dominating at small (≲ 500 Mpc) and large (≳ 1000 Mpc) scales. The dark matter component, which dominates at small scales, contributes Ωm ≈ 0.35 and has an equation of state p = 0, while the dark energy component, which dominates at large scales, contributes ΩV ≈ 0.65 and has an equation of state p ≃ −ρ. It is usual to postulate weakly interacting massive particles (WIMPs) for the first component and some form of scalar field or cosmological constant for the second. We explore the possibility of a scalar field with a Lagrangian L = −V(φ)√(1 − ∂^iφ ∂_iφ) acting as both clustered dark matter and smoother dark energy, with a scale-dependent equation of state. This model predicts a relation between the ratio r = ρV/ρDM of the energy densities of the two dark components and the expansion rate n of the universe [with a(t) ∝ t^n] of the form n = (2/3)(1 + r). For r ≈ 2 we get n ≈ 2, which is consistent with observations.

The most conservative explanation of the current cosmological observations requires two components of dark matter. (a) The first is a dust component with equation of state p = 0, contributing Ωm ≈ 0.35. This component clusters gravitationally at small scales (l ≲ 500 Mpc, say) and can explain observations from galactic to supercluster scales. (b) The second is a negative-pressure component with an equation of state p = wρ, −1 < w < −0.5, contributing ΩV ≈ 0.65. There is some leeway in the (p/ρ) of the second component, but it is certain that p is negative and (p/ρ) is of order unity (for recent reviews, see [1]). The cosmological constant provides w = −1, while several other candidates based on scalar fields with potentials [2] provide different values of w in the acceptable range.

By and large, component (b) is noticed only in the large-scale expansion and does not cluster gravitationally to a significant extent. Neither component (a) nor component (b) has direct or indirect laboratory evidence for its existence. In this sense, cosmology requires invoking the tooth fairy twice to explain the current observations. It would be nice if a single candidate could be found which explains the observations at both small and large scales (so that the tooth fairy needs to be invoked only once). The standard cold dark matter model of the 1980s belongs to this class but – unfortunately – cannot explain the observations. It is obvious from the description in the first paragraph that any such (single) candidate must be capable of leading to different equations of state at different scales, making a transition from p = 0 at small scales to p = −ρ (say) at large scales. Normal particles (that is, one-particle excitations of standard quantum field theory), such as weakly interacting massive particles (WIMPs), usually lead to the equation of state p = 0 at all scales. On the other hand, homogeneous …
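The quoted relation n = (2/3)(1 + r) follows from assigning the two-component mixture an effective equation of state. A short sketch of the standard argument, under the assumption that the dark matter (p = 0) and dark energy (p = −ρV) densities evolve together so that r stays constant:

```latex
% Power-law expansion a(t) \propto t^n for a fluid with constant w_{\rm eff}:
%   n = 2 / [3(1 + w_{\rm eff})].
% For pressureless dark matter mixed with dark energy of pressure -\rho_V:
\begin{align}
  w_{\rm eff}
    &= \frac{p_{\rm DM} + p_V}{\rho_{\rm DM} + \rho_V}
     = \frac{-\rho_V}{\rho_{\rm DM} + \rho_V}
     = -\frac{r}{1+r},
     \qquad r \equiv \frac{\rho_V}{\rho_{\rm DM}}, \\
  n &= \frac{2}{3\left(1 - \dfrac{r}{1+r}\right)}
     = \frac{2}{3}\,(1 + r).
\end{align}
% For r \approx 2 this gives n \approx 2, as stated in the abstract.
```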
A self‐consistent formalism to jointly study cosmic reionization and the thermal history of the intergalactic medium (IGM) in a ΛCDM cosmology is presented. The model implements most of the relevant physics governing these processes, such as the inhomogeneous IGM density distribution, three different classes of ionizing photon sources [massive Population III (PopIII) stars, Population II (PopII) stars and quasi‐stellar objects (QSOs)], and radiative feedback inhibiting star formation in low‐mass galaxies. By constraining the model free parameters with available data on the redshift evolution of Lyman‐limit absorption systems, the Gunn–Peterson and electron scattering optical depths, the near‐infrared background (NIRB) and the cosmic star formation history, we select a fiducial model, whose main predictions are as follows. Hydrogen was completely reionized at z ≈ 15, while He ii must have been reionized by z ≈ 12, allowing for the uncertainties in the ionizing photon efficiencies of stars. At z ≈ 7, He iii suffered an almost complete recombination as a result of the extinction of PopIII stars, as required by the interpretation of the NIRB. A QSO‐induced complete He ii reionization occurs at z = 3.5; a similar double H reionization does not take place because of the large number of photons with energies > 13.6 eV from PopII stars and QSOs, even after all PopIII stars have disappeared. Following reionization, the temperature of the IGM corresponding to the mean gas density, T0, is boosted to 1.5 × 10^4 K, after which it decreases with a relatively flat trend. Observations of T0 are consistent with He being singly ionized at z ≳ 3.5 and doubly ionized at z ≲ 3.5; this might be interpreted as a signature of a (second) He ii reionization.
Only 0.3 per cent of the stars produced by z = 2 need to be PopIII stars in order to achieve the first hydrogen reionization. In addition, we obtain useful constraints on the ionizing photon efficiencies (a combination of the star‐forming efficiency and the escape fraction of ionizing photons from collapsed haloes) of PopII and PopIII stars, namely εPopII < 0.01 and 0.002 < εPopIII < 0.03. Varying the efficiencies within these ranges does not affect the scenario described above. Such a model not only relieves the tension between the Gunn–Peterson optical depth and WMAP observations, but also accounts self‐consistently for all known observational constraints. We discuss how the results compare with recent numerical reionization studies and other theoretical arguments.
Abstract. We extend our previous analysis of cosmological Type Ia supernova data (Padmanabhan & Choudhury 2003) to include three recent compilations of data sets. Our analysis ignores possible correlations and systematic effects present in the data and concentrates mostly on some key theoretical issues. Among the three data sets, the first consists of 194 points obtained from various observations, while the second discards some points from the first because of large uncertainties and thus consists of 142 points. The third data set is obtained from the second by adding the latest 14 points observed through HST. A careful comparison of these data sets helps us draw the following conclusions: (i) All three data sets strongly rule out non-accelerating models. Interestingly, the first and second data sets favour a closed universe: if Ωtot ≡ Ωm + ΩΛ, then the probability of obtaining models with Ωtot > 1 is ≳ 0.97. Hence these data sets are in mild disagreement with the "concordance" flat model. However, this disagreement is reduced (the probability of models with Ωtot > 1 being ≈ 0.9) for the third data set, which includes the most recent points observed by HST around 1 < z < 1.6. (ii) When the first data set is divided into two subsets consisting of low-redshift (z < 0.34) and high-redshift (z > 0.34) supernovae, it turns out that these two subsets, individually, admit non-accelerating models with zero dark energy because of different magnitude zero-point values for the two subsets. This can also be seen when the data are analysed allowing for possibly different magnitude zero-points for the two redshift subsets. However, non-accelerating models seem to be ruled out using only the low-redshift data for the other two data sets, which have smaller uncertainties.
(iii) We also find that it is quite difficult to measure the evolution of the dark energy equation of state wX(z), though its present value can be constrained quite well. The best-fitting value mildly favours a dark energy component with current equation of state wX < −1, thus opening up the possibility of more exotic forms of matter. However, the data are still consistent with the standard cosmological constant at the 99 per cent confidence level for Ωm ≳ 0.2.
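The acceleration criterion behind conclusion (i) can be stated compactly: for a universe containing pressureless matter and a cosmological constant, the present deceleration parameter is q0 = Ωm/2 − ΩΛ, and non-accelerating models are those with q0 ≥ 0. A minimal numerical sketch (the function name and the sample density parameters are illustrative, not fits to the data sets above):

```python
def q0(omega_m, omega_lambda):
    """Present-day deceleration parameter for a matter + Lambda universe.

    q0 = Omega_m / 2 - Omega_Lambda; q0 < 0 means the expansion
    is currently accelerating.
    """
    return 0.5 * omega_m - omega_lambda

# "Concordance"-like flat model: accelerating (q0 ≈ -0.55).
print(q0(0.3, 0.7))

# Matter-only Einstein-de Sitter model: decelerating (q0 = 0.5),
# the kind of model the supernova data rule out.
print(q0(1.0, 0.0))
```

Ruling out "non-accelerating models" in the abstract thus amounts to excluding the region q0 ≥ 0 of the (Ωm, ΩΛ) plane.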
Using our cosmological radiative transfer code, we study the implications of the updated quasi-stellar object (QSO) emissivity and star formation history for the escape fraction (fesc) of hydrogen-ionizing photons from galaxies. We estimate the fesc required to reionize the Universe and to maintain the ionization state of the intergalactic medium in the post-reionization era. At z > 5.5, we show that a constant fesc of 0.14 to 0.22 is sufficient to reionize the Universe. At z < 3.5, consistent with various observations, we find that fesc can take values from 0 to 0.05. However, a steep rise in fesc, of at least a factor of ∼3, is required between z = 3.5 and 5.5. This arises from a rapidly decreasing QSO emissivity at z > 3 together with the nearly constant measured H i photoionization rate at 3 < z < 5. We show that this requirement of a steep rise in fesc over a very short time can be relaxed if we consider the contribution from the recently found large number density of faint QSOs at z ≳ 4. In addition, a simple extrapolation of the contribution of such QSOs to high z suggests that QSOs alone can reionize the Universe. This implies that, at z > 3.5, either the properties of galaxies should evolve rapidly to increase fesc, or most low-mass galaxies should host massive black holes sustaining accretion over a prolonged period. These results motivate a careful investigation of theoretical predictions of these alternative scenarios, which can be distinguished using future observations. Moreover, it is also important to revisit the measurements of the H i photoionization rate, which are crucial to the analysis presented here.
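The kind of photon budget underlying such fesc estimates can be illustrated with an order-of-magnitude balance: to keep the IGM ionized, galaxies must supply ionizing photons at least as fast as hydrogen recombines. The sketch below uses round illustrative numbers (comoving hydrogen density, case-B recombination coefficient, clumping factor, ionizing photon yield ξ_ion per unit star formation rate) and is not the paper's radiative transfer calculation:

```python
# Order-of-magnitude photon budget for maintaining an ionized IGM.
# All parameter values are illustrative assumptions, not results
# from the radiative transfer code described in the abstract.

MPC_CM  = 3.086e24   # cm per Mpc
N_H0    = 1.9e-7     # comoving hydrogen number density [cm^-3] (Omega_b h^2 ~ 0.022)
ALPHA_B = 2.6e-13    # case-B recombination coefficient at ~10^4 K [cm^3 s^-1]

def ndot_crit(z, clumping=3.0):
    """Ionizing photon rate [s^-1 per comoving Mpc^3] needed to balance
    recombinations at redshift z; proper density scales as (1+z)^3."""
    per_cm3 = clumping * ALPHA_B * N_H0**2 * (1.0 + z)**3
    return per_cm3 * MPC_CM**3

def fesc_required(z, sfr_density, xi_ion=1.4e53, clumping=3.0):
    """Escape fraction needed for galaxies to supply ndot_crit, given a
    star formation rate density [Msun yr^-1 Mpc^-3] and an intrinsic
    yield xi_ion [photons s^-1 per Msun yr^-1]."""
    return ndot_crit(z, clumping) / (xi_ion * sfr_density)

# ~1e50 photons s^-1 Mpc^-3 at z = 4 under these assumptions.
print(f"{ndot_crit(4):.1e}")
print(f"{fesc_required(4, 0.07):.3f}")
```

With these assumed inputs the required escape fraction at z ≈ 4 comes out at the per-cent level, broadly consistent with the low-redshift values quoted above; the steeper requirement at z > 5.5 reflects the lower star formation rate density and the loss of the QSO contribution there.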