The influence of considering a generalized dark matter (GDM) model, which allows for non-pressureless dark matter with a nonvanishing sound speed, on the nonlinear spherical collapse model is discussed for the Einstein-de Sitter-like and ΛGDM models. Assuming that the vacuum component responsible for the accelerated expansion of the Universe does not cluster, and therefore behaves like the cosmological constant Λ, we show how changes in the GDM characteristic parameters affect the linear density threshold for collapse of the nonrelativistic component (δc) and its virial overdensity (ΔV). We find that a positive GDM equation-of-state parameter, w_gdm, leads to lower values of δc than in the standard spherical collapse model, and that this effect is much stronger than the one induced by a change in the GDM sound speed, c²_s,gdm. We also find that ΔV is only slightly affected and is mostly sensitive to w_gdm. These effects can be relatively enhanced for lower values of the matter density. When translated to nonlinear observables such as the halo mass function, the effects of the additional physics on δc and ΔV induce an overall deviation of about 40% with respect to the standard ΛCDM model at late times for high-mass objects. However, within the current constraints on c²_s,gdm and w_gdm, we find that these changes are the consequence of properly taking into account the correct linear matter power spectrum for the GDM model, while the effects coming from modifications of the spherical collapse model remain negligible. Using a phenomenologically motivated approach, we also study the nonlinear matter power spectrum and find that the additional properties of the dark matter component lead, in general, to a strong suppression of the nonlinear power spectrum with respect to the corresponding ΛCDM one. Finally, as a practical example, we compare ΛGDM and ΛCDM using galaxy cluster abundance measurements and find that these small-scale probes will allow us to put more stringent constraints on the nature of dark matter.
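For reference, the standard collapse threshold against which the GDM results are compared can be reproduced numerically. Below is a minimal sketch for the plain Einstein-de Sitter case (none of the GDM physics of this work): it bisects on the initial overdensity until the nonlinear spherical-collapse equation diverges at a = 1, then extrapolates the same initial condition with the linear growing mode D(a) = a to recover δc ≈ 1.686.

```python
# Minimal sketch: the Einstein-de Sitter linear collapse threshold
# delta_c ~ 1.686 (standard case; no GDM pressure or sound speed).
import numpy as np
from scipy.integrate import solve_ivp

A_INI, A_COL = 1e-5, 1.0  # starting and collapse scale factors

def nonlinear_rhs(a, y):
    """EdS spherical collapse with the scale factor a as time variable:
    d2delta/da2 = -(3/2a) delta' + (4/3) delta'^2/(1+delta) + (3/2a^2) delta(1+delta)."""
    delta, dp = y
    return [dp, -1.5 * dp / a + (4.0 / 3.0) * dp**2 / (1.0 + delta)
            + 1.5 * delta * (1.0 + delta) / a**2]

def collapse_a(delta_ini):
    """Scale factor at which the nonlinear overdensity blows up."""
    hit = lambda a, y: y[0] - 1e8            # delta = 1e8 counts as collapsed
    hit.terminal = True
    sol = solve_ivp(nonlinear_rhs, (A_INI, A_COL),
                    [delta_ini, delta_ini / A_INI],  # growing-mode initial slope
                    events=hit, rtol=1e-8, atol=1e-10)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

# Bisect on the initial overdensity so that collapse occurs exactly at a = 1.
lo, hi = 1e-6, 1e-2
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if collapse_a(mid) > A_COL else (lo, mid)

# In EdS the linear growing mode is D(a) = a, so the linearly extrapolated
# threshold is simply delta_ini * (A_COL / A_INI).
print(f"delta_c = {0.5 * (lo + hi) / A_INI:.4f}   (analytic value: 1.6865)")
```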
Photometric galaxy surveys probe the late-time Universe, where the density field is highly non-Gaussian. A consequence is the emergence of the super-sample covariance (SSC), a non-Gaussian covariance term that is sensitive to fluctuations on scales larger than the survey window. In this work, we study the impact of the survey geometry on the SSC and, subsequently, on cosmological parameter inference. We devise a fast SSC approximation that accounts for the survey geometry and compare its performance to the common approximation of rescaling the results by the fraction of the sky covered by the survey, f_sky, dubbed the 'full-sky approximation'. To gauge the impact of our new SSC recipe, which we call 'partial-sky', we perform Fisher forecasts on the parameters of the (w0, wa)-CDM model in a 3 × 2 point analysis, varying the survey area, the geometry of the mask, and the galaxy distribution inside our redshift bins. The differences in the marginalised forecast errors (with the full-sky approximation performing poorly for small survey areas but excellently for stage-IV-like areas) are found to be absorbed by the marginalisation over galaxy bias nuisance parameters. For large survey areas, the unmarginalised errors are underestimated by about 10% for all probes considered. This is a hint that, even for stage-IV-like surveys, the partial-sky method introduced in this work will be necessary if tight priors are applied on these nuisance parameters. We make the partial-sky method public with a new release of the public code PySSC.
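To make the central quantity concrete: the SSC term is driven by the super-sample variance σ_b², the variance of the mean matter density within the survey window. The sketch below evaluates it for an idealised spherical window; the partial-sky method of this work instead uses the true angular mask, and the power spectrum here is a crude placeholder, not a Boltzmann-code output.

```python
# Toy super-sample variance: sigma_b^2 = (1/2 pi^2) int dk k^2 P(k) W(kR)^2
# for a spherical top-hat window of comoving radius R. Illustration only.
import numpy as np
from scipy.integrate import simpson

def pk_toy(k, k_eq=0.02, n_s=0.96, amp=2.0e4):
    """Placeholder linear P(k) [(Mpc/h)^3] with a turnover near k_eq [h/Mpc]."""
    return amp * (k / k_eq)**n_s / (1.0 + (k / k_eq)**4)**0.75

def tophat(x):
    """Fourier transform of a 3D spherical top-hat window."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_b2(radius):
    k = np.logspace(-5, 1, 20000)            # h/Mpc
    return simpson(k**2 * pk_toy(k) * tophat(k * radius)**2, x=k) / (2 * np.pi**2)

# A larger window averages over more long-wavelength modes, so sigma_b^2
# (and with it the SSC term, which scales with it) drops with survey size.
for r in (200.0, 500.0, 1000.0):             # comoving radii in Mpc/h
    print(f"R = {r:6.0f} Mpc/h   sigma_b^2 = {sigma_b2(r):.3e}")
```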
In recent years, forecasting activities have become an important tool in designing and optimising large-scale structure surveys. To predict the performance of such surveys, the Fisher matrix formalism is frequently used as a fast and easy way to compute constraints on cosmological parameters. Among these parameters, those describing the properties of dark energy are of particular interest, their study being one of the main goals of modern cosmology. As such, a metric for the power of a survey to constrain dark energy is provided by the figure of merit (FoM). This is defined as the inverse of the surface of the contour given by the joint variance of the dark energy equation-of-state parameters {w0, wa} in the Chevallier-Polarski-Linder parameterization, which can be evaluated from the covariance matrix of the parameters. This covariance matrix is obtained as the inverse of the Fisher matrix. The inversion of an ill-conditioned matrix can result in large errors on the covariance coefficients if the elements of the Fisher matrix are estimated with insufficient precision. The condition number is a metric providing a mathematical lower limit to the required precision for a reliable inversion, but it is often too stringent in practice for Fisher matrices with sizes greater than 2 × 2. In this paper, we propose a general numerical method to guarantee a certain precision on the inferred constraints, such as the FoM. It consists of randomly vibrating (perturbing) the Fisher matrix elements with Gaussian perturbations of a given amplitude and then evaluating the maximum amplitude that keeps the FoM within the chosen precision. The steps used in the numerical derivatives and integrals involved in the calculation of the Fisher matrix elements can then be chosen accordingly, in order to keep the precision of the Fisher matrix elements below this maximum amplitude. We illustrate our approach by forecasting the cosmological constraints from the galaxy power spectrum of stage-IV spectroscopic surveys, and we infer the range of steps for which the Fisher matrix approach is numerically reliable. We explicitly check that using steps that are larger by a factor of two produces an inaccurate estimation of the constraints. We further validate our approach by comparing the Fisher matrix contours to those obtained with a Markov chain Monte Carlo (MCMC) approach, in the case where the MCMC posterior distribution is close to a Gaussian, and find excellent agreement between the two approaches.
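The vibration test can be condensed into a few lines: perturb the Fisher matrix elements with Gaussian noise of a given relative amplitude, propagate each realisation to the FoM, and record the induced scatter. The sketch below uses a random, well-conditioned stand-in Fisher matrix (not a survey forecast), takes the first two parameters to play the role of (w0, wa), and adopts the common convention FoM = [det C(w0, wa)]^(-1/2).

```python
# Sketch of the Fisher-matrix 'vibration' test: Gaussian perturbations of
# relative amplitude eps on every element, propagated to the FoM scatter.
import numpy as np

rng = np.random.default_rng(42)

def fom(fisher, i=0, j=1):
    """FoM = 1/sqrt(det of the marginalised (w0, wa) covariance block)."""
    block = np.linalg.inv(fisher)[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(block))

def fom_scatter(fisher, eps, n_trials=2000):
    """Relative FoM scatter induced by symmetric element-wise perturbations."""
    f0, foms = fom(fisher), []
    for _ in range(n_trials):
        noise = rng.normal(0.0, eps, fisher.shape) * fisher
        noise = 0.5 * (noise + noise.T)      # keep the perturbed matrix symmetric
        foms.append(fom(fisher + noise))
    return np.std(foms) / f0

# Stand-in 5x5 Fisher matrix (symmetric positive definite by construction).
a = rng.normal(size=(5, 5))
fisher = a @ a.T + 5.0 * np.eye(5)

# The largest eps keeping the scatter below the chosen precision (say 1%)
# sets the accuracy to which the Fisher elements themselves must be computed,
# and hence the admissible numerical derivative/integration steps.
for eps in (1e-4, 1e-3, 1e-2, 5e-2):
    print(f"eps = {eps:.0e}   relative FoM scatter = {fom_scatter(fisher, eps):.3%}")
```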
The forthcoming Euclid survey will be able to map the large-scale structure of the Universe with unprecedented precision, with the aim of tightly constraining the standard cosmological model and its most common extensions. The great sensitivity of Euclid can, however, also be exploited to test our most fundamental assumptions at the basis of the cosmological investigation. In this work we present two recent results of the Euclid Consortium, where forecast Euclid products are used alongside data from other surveys to constrain violations of the distance duality relation and a time evolution of the fine-structure constant. We show how Euclid will significantly contribute to constraining these effects, both of which are connected with the presence of new physics beyond the standard cosmological model.
Context. The Universe's assumed homogeneity and isotropy is known as the cosmological principle. It is one of the assumptions that led to the Friedmann-Lemaître-Robertson-Walker (FLRW) metric and is a cornerstone of modern cosmology, because the metric plays a crucial role in the determination of the cosmological observables. Thus, it is of paramount importance to question this principle and perform observational tests that may falsify it. Aims. Here, we explore the use of galaxy cluster counts as a probe of a large-scale inhomogeneity, which is a novel approach to the study of inhomogeneous models, and we determine the precision with which future galaxy cluster surveys will be able to test the cosmological principle. Methods. We present forecast constraints on the inhomogeneous Lemaître-Tolman-Bondi (LTB) model with a cosmological constant and cold dark matter, essentially a ΛCDM model endowed with a spherical, large-scale inhomogeneity, from a combination of simulated data corresponding to a compilation of 'Stage-IV' galaxy surveys. To this end, we followed a methodology that involves a mass function correction obtained from numerical N-body simulations of an LTB cosmology. Results. When considering the ΛCDM fiducial model as a baseline for constructing our mock catalogs, we find that our combination of the forthcoming cluster surveys will improve the constraints on the cosmological principle parameters and the FLRW parameters by about 50% with respect to previous similar forecasts performed using geometrical and linear growth-of-structure probes, with variations of ±20% depending on the level of knowledge of systematic effects. Conclusions. These results indicate that galaxy cluster abundances are sensitive probes of inhomogeneity and that next-generation galaxy cluster surveys will thoroughly test homogeneity on cosmological scales, tightening the constraints on possible violations of the cosmological principle in the framework of ΛLTB scenarios.
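As background to the observable used in this forecast, the sketch below assembles a toy version of the cluster number counts, dN/dz = (dV/dz) × n(> M_min, z), with a Press-Schechter mass function and a crude power-law σ(M). All numbers are illustrative placeholders; the LTB-calibrated mass function correction from N-body simulations used in the paper is not reproduced here.

```python
# Toy full-sky cluster counts dN/dz above a mass threshold, for a flat
# LCDM background. Press-Schechter mass function with a placeholder
# power-law sigma(M) and EdS-like growth; illustration only.
import numpy as np
from scipy.integrate import quad

C_KMS, H0, OM = 2.998e5, 70.0, 0.3
RHO_M = 2.775e11 * OM * (H0 / 100.0)**2     # mean matter density [Msun/Mpc^3]
DELTA_C = 1.686                             # linear collapse threshold

hubble = lambda z: H0 * np.sqrt(OM * (1 + z)**3 + 1 - OM)   # [km/s/Mpc]

def comoving_distance(z):                   # [Mpc]
    return C_KMS * quad(lambda zz: 1.0 / hubble(zz), 0.0, z)[0]

def sigma_m(m, z):
    """Placeholder sigma(M, z): power law in mass, (1+z)^-1 growth."""
    return 0.8 * (m / 2e14)**-0.3 / (1.0 + z)

def dn_dlnm(m, z):
    """Press-Schechter: sqrt(2/pi) (rho_m/M) nu e^{-nu^2/2} |dln sigma/dln M|."""
    nu = DELTA_C / sigma_m(m, z)
    return np.sqrt(2 / np.pi) * RHO_M / m * nu * np.exp(-0.5 * nu**2) * 0.3

def dn_dz(z, m_min=1e14):
    chi = comoving_distance(z)
    dv_dz = 4 * np.pi * C_KMS * chi**2 / hubble(z)           # [Mpc^3]
    n = quad(lambda lnm: dn_dlnm(np.exp(lnm), z),
             np.log(m_min), np.log(1e16))[0]                 # [Mpc^-3]
    return dv_dz * n

for z in (0.2, 0.5, 1.0):
    print(f"z = {z:.1f}   dN/dz ~ {dn_dz(z):.2e} clusters (full sky)")
```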
Context.
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
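For concreteness, the relation and a common one-parameter deviation (of the kind the parametric analyses below rely on) can be written as

```latex
% Etherington (distance duality) relation and a standard parametrised violation
\eta(z) \equiv \frac{d_L(z)}{(1+z)^2\, d_A(z)}\,, \qquad
\eta(z) = 1 \ \text{(DDR holds)}\,, \qquad
\eta(z) = (1+z)^{\epsilon_0} \ \text{(parametrised violation)}\,,
```

so that a measurement of ε0 consistent with zero recovers the standard relation.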
Aims.
We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods.
We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine-learning reconstruction using genetic algorithms.
Results.
We find that, in combination with external probes, Euclid can improve current constraints by approximately a factor of six for parametric methods, and by a factor of three for non-parametric methods.
Conclusions.
Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different non-linear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly non-linear regime, with the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of non-linear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the non-linear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for non-linear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
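The parameter shifts referred to above are commonly estimated with the linear Fisher-bias formula, b = F⁻¹ (∂μ/∂θ)ᵀ C⁻¹ Δμ, where Δμ is the difference between the true and the assumed data vectors. Below is a generic sketch of that formula on a toy linear model; it is not the Euclid × Planck configuration of this work.

```python
# Fisher-bias estimate: how a systematic error in the model data vector
# (e.g. an inaccurate non-linear or baryon correction) shifts the
# best-fit parameters. Toy linear model; all inputs are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_par = 50, 3

dmu = rng.normal(size=(n_data, n_par))         # d(model vector)/d(parameters)
cov = np.diag(rng.uniform(0.5, 2.0, n_data))   # diagonal data covariance (toy)
icov = np.linalg.inv(cov)

fisher = dmu.T @ icov @ dmu                    # F_ab = dmu_a^T C^-1 dmu_b
delta_mu = 0.05 * rng.normal(size=n_data)      # true minus assumed data vector

bias = np.linalg.solve(fisher, dmu.T @ icov @ delta_mu)
sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))

# The meaningful quantity is the bias in units of the statistical error:
# values approaching or exceeding unity signal a significant parameter shift.
for a, (b, s) in enumerate(zip(bias, sigma)):
    print(f"theta_{a}: bias/sigma = {b / s:+.2f}")
```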
Context.
The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent, because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid.
Aims.
In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods.
We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy-bias model, obtained from the Flagship simulation, as well as additional photometric-redshift uncertainties, and we elucidate the impact of including the XC terms on constraining the latter.
Results.
Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IAs by a factor of about four. The XC terms also help in constraining the γ parameter of minimal modified-gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IAs, we show that the XC terms can help in distinguishing between different models, and that neglecting the IA terms can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions.
We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions, but, at the same time, it requires more precise knowledge of this mean with respect to single probes in order not to degrade the final "figure of merit".
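A stripped-down illustration of why the XC terms add information: in the toy model below, three band powers C_gg = b²A, C_gk = bA, and C_kk = A depend on an amplitude A (mimicking the matter clustering amplitude) and a galaxy bias b, and including the cross-spectrum in the Fisher matrix visibly tightens both constraints. The numbers and the uncorrelated per-element errors are assumptions for illustration only.

```python
# Toy 2-parameter Fisher comparison: galaxy clustering (gg) and lensing
# (kk) alone versus adding their cross-correlation (gk).
import numpy as np

A, b, sigma = 1.0, 1.3, 0.05      # fiducial amplitude, bias, per-element error

def fisher(use_xc):
    derivs = [[b**2, 2 * b * A],  # dC_gg / d(A, b)
              [1.0, 0.0]]         # dC_kk / d(A, b)
    if use_xc:
        derivs.append([b, A])     # dC_gk / d(A, b)
    d = np.array(derivs)
    return d.T @ d / sigma**2     # Gaussian Fisher matrix, uncorrelated errors

for use_xc in (False, True):
    err = np.sqrt(np.diag(np.linalg.inv(fisher(use_xc))))
    label = "GCph + WL + XC" if use_xc else "GCph + WL     "
    print(f"{label}: sigma(A) = {err[0]:.3f}, sigma(b) = {err[1]:.3f}")
```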