A major quest in cosmology is to understand the nature of dark energy. It is now well known that the use of several cosmological probes is required to break the underlying degeneracies among cosmological parameters. In this paper, we present a method based on a frequentist approach that combines probes without any prior constraints. As one application, the current supernovae type Ia and cosmic microwave background data are analyzed with an evolving dark energy component, and our results are first compared to other analyses. We emphasize the consequences of implementing the dark energy perturbations for an equation of state that varies with time. We then simulate the expectations of different future projects. The constraints from weak lensing surveys on the measurement of dark energy evolution are combined with the measurements from the cosmic microwave background and type Ia supernovae. We present the impacts for mid-term and long-term surveys and confirm that the combination with weak lensing is very powerful in breaking parameter degeneracies. A second generation of experiments is, however, required to achieve an error of 0.1 on the parameters describing the evolution of dark energy.
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock–Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
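The geometric combination constrained above, D_M H, is dimensionless once divided by the speed of light and follows directly from the Friedmann equation. A minimal numerical sketch, assuming flat ΛCDM with an illustrative Ωm = 0.32 (not necessarily this paper's fiducial value):

```python
import math

def E(z, om=0.32):
    """Dimensionless Hubble rate E(z) = H(z)/H0 in flat LambdaCDM."""
    return math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def dm_h_over_c(z, om=0.32, steps=10000):
    """Alcock-Paczynski observable D_M(z) * H(z) / c.

    In flat LambdaCDM, D_M = (c/H0) * integral_0^z dz'/E(z'), so the
    dimensionless combination is E(z) times that integral (midpoint rule).
    """
    dz = z / steps
    integral = sum(dz / E((i + 0.5) * dz, om) for i in range(steps))
    return E(z, om) * integral
```

At z = 1 this evaluates to roughly 1.37 for Ωm = 0.32; a 0.5% constraint per redshift bin, as forecast above, pins this combination very tightly.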
Euclid preparation. Barnett, R.; Warren, S. J.; Mortlock, D. J.; et al. Astronomy and Astrophysics (Berlin), 11/2019, Volume 631. Journal article, peer reviewed, open access.
We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^{k(z − 6)}, k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to J_AB ∼ 23. Quasars at z > 8 may be selected from Euclid OYJH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼25 quasars beyond the current record of z = 7.5, including ∼8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M_1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or the E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at J_AB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
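The assumed decline of the quasar luminosity function, Φ ∝ 10^{k(z − 6)}, is simple to evaluate; a short sketch comparing the two decline rates quoted in the abstract:

```python
def qlf_decline(z, k):
    """Space density of luminous quasars relative to z = 6,
    Phi(z)/Phi(6) = 10**(k * (z - 6)), as quoted in the abstract."""
    return 10.0 ** (k * (z - 6.0))

# Comparing the two assumed decline rates at z = 8:
# k = -0.72 leaves ~3.6% of the z = 6 density; k = -0.92 leaves only ~1.4%.
```

The factor-of-two-plus difference at z = 8 between the two k values is what drives the spread in the predicted quasar yields.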
The Euclid mission – with its spectroscopic galaxy survey covering a sky area over 15 000 deg² in the redshift range 0.9 < z < 1.8 – will provide a sample of tens of thousands of cosmic voids. This paper thoroughly explores for the first time the constraining power of the void size function on the properties of dark energy (DE) from a survey mock catalogue, the official Euclid Flagship simulation. We identified voids in the Flagship light-cone, which closely matches the features of the upcoming Euclid spectroscopic data set. We modelled the void size function considering a state-of-the-art methodology: we relied on the volume-conserving (Vdn) model, a modification of the popular Sheth & van de Weygaert model for void number counts, extended by means of a linear function of the large-scale galaxy bias. We found an excellent agreement between model predictions and measured mock void number counts. We computed updated forecasts for the Euclid mission on DE from the void size function and provided reliable void number estimates to serve as a basis for further forecasts of cosmological applications using voids. We analysed two different cosmological models for DE: the first described by a constant DE equation-of-state parameter, w, and the second by a dynamic equation of state with coefficients w_0 and w_a. We forecast 1σ errors on w lower than 10%, and we estimated an expected figure of merit (FoM) for the dynamical DE scenario of FoM_{w0,wa} = 17 when considering only the neutrino mass as an additional free parameter of the model. The analysis is based on conservative assumptions to ensure full robustness, and is a pathfinder for future enhancements of the technique. Our results showcase the impressive constraining power of the void size function from the Euclid spectroscopic sample, both as a stand-alone probe and in combination with other Euclid cosmological probes.
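The figure of merit quoted for (w_0, w_a) is conventionally the DETF-style quantity FoM = 1/√det C, with C the marginalized 2 × 2 covariance of the two parameters; a minimal sketch, with illustrative covariance values that are not taken from the paper:

```python
import math

def figure_of_merit(var_w0, cov_w0wa, var_wa):
    """DETF-style dark energy figure of merit, FoM = 1 / sqrt(det C),
    where C is the 2x2 marginalized covariance of (w0, wa)."""
    det = var_w0 * var_wa - cov_w0wa ** 2
    return 1.0 / math.sqrt(det)

# Illustrative: sigma(w0) = 0.2, sigma(wa) = 1.0, uncorrelated -> FoM = 5.
```

A larger FoM means a smaller allowed area in the (w_0, w_a) plane; correlation between the two parameters shrinks det C and therefore raises the FoM for fixed marginal errors.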
Context. The standard cosmological model is based on the fundamental assumptions of a spatially homogeneous and isotropic universe on large scales. An observational detection of a violation of these assumptions at any redshift would immediately indicate the presence of new physics.
Aims. We quantify the ability of the Euclid mission, together with contemporary surveys, to improve the current sensitivity of null tests of the canonical cosmological constant Λ and the cold dark matter (ΛCDM) model in the redshift range 0 < z < 1.8.
Methods. We considered both currently available data and simulated Euclid and external data products based on a ΛCDM fiducial model, an evolving dark energy model assuming the Chevallier-Polarski-Linder parameterization or an inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ, and carried out two separate but complementary analyses: a machine learning reconstruction of the null tests based on genetic algorithms, and a theory-agnostic parametric approach based on Taylor expansion and binning of the data, in order to avoid assumptions about any particular model.
Results. We find that in combination with external probes, Euclid can improve current constraints on null tests of the ΛCDM model by approximately a factor of three when using the machine learning approach, and by a further factor of two in the case of the parametric approach. However, we also find that in certain cases the parametric approach may be biased against, or may miss, some features of models far from ΛCDM.
Conclusions. Our analysis highlights the importance of synergies between Euclid and other surveys. These synergies are crucial for providing tighter constraints over an extended redshift range for a plethora of different consistency tests of some of the main assumptions of the current cosmological paradigm.
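The Chevallier-Polarski-Linder parameterization used above for the evolving dark energy model has the closed form w(z) = w_0 + w_a z/(1 + z); a one-function sketch:

```python
def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder dark energy equation of state:
    w(z) = w0 + wa * z / (1 + z).
    Reduces to w0 at z = 0 and tends to w0 + wa as z -> infinity."""
    return w0 + wa * z / (1.0 + z)
```

The ΛCDM fiducial corresponds to w_0 = −1, w_a = 0, so null tests of this kind amount to checking whether the data prefer any departure from that point.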
The Euclid space telescope will survey a large dataset of cosmic voids traced by dense samples of galaxies. In this work we estimate its expected performance when exploiting angular photometric void clustering, galaxy weak lensing, and their cross-correlation. To this aim, we implemented a Fisher matrix approach tailored for voids from the Euclid photometric dataset, and we present the first forecasts on cosmological parameters that include the void-lensing correlation. We examined two different probe settings, pessimistic and optimistic, both for void clustering and galaxy lensing. We carried out forecast analyses in four model cosmologies, accounting for a varying total neutrino mass, M_ν, and a dynamical dark energy (DE) equation of state, w(z), described by the popular Chevallier-Polarski-Linder parametrization. We find that void clustering constraints on h and Ω_b are competitive with galaxy lensing alone, while errors on n_s decrease thanks to the orthogonality of the two probes in the 2D-projected parameter space. We also note that, as a whole, with respect to assuming the two probes as independent, the inclusion of the void-lensing cross-correlation signal improves parameter constraints by 10–15%, and enhances the joint void clustering and galaxy lensing figure of merit (FoM) by 10% and 25% in the pessimistic and optimistic scenarios, respectively. Finally, when further combining with the spectroscopic galaxy clustering, assumed as an independent probe, we find that, in the most competitive case, the FoM increases by a factor of 4 with respect to the combination of weak lensing and spectroscopic galaxy clustering taken as independent probes. The forecasts presented in this work show that photometric void clustering and its cross-correlation with galaxy lensing deserve to be exploited in the data analysis of the Euclid galaxy survey and promise to improve its constraining power, especially on h, Ω_b, the neutrino mass, and the DE evolution.
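In a Fisher forecast of this kind, independent probes combine by summing their Fisher matrices, and marginalized 1σ errors come from the diagonal of the inverse. A toy two-parameter sketch, with invented matrices chosen to illustrate how two probes with orthogonal degeneracies tighten both errors:

```python
import math

def fisher_add(f1, f2):
    """Independent probes combine by summing their Fisher matrices (2x2)."""
    return [[f1[i][j] + f2[i][j] for j in range(2)] for i in range(2)]

def marginalized_errors(f):
    """1-sigma marginalized errors: sqrt of the diagonal of F^{-1} (2x2)."""
    det = f[0][0] * f[1][1] - f[0][1] * f[1][0]
    return (math.sqrt(f[1][1] / det), math.sqrt(f[0][0] / det))

# Two probes, each tight on a different parameter (invented numbers):
clustering = [[4.0, 0.0], [0.0, 1.0]]  # errors (0.50, 1.00) alone
lensing = [[1.0, 0.0], [0.0, 4.0]]     # errors (1.00, 0.50) alone
combined = marginalized_errors(fisher_add(clustering, lensing))
# Combined errors are ~0.45 on both parameters, tighter than either alone.
```

A non-zero cross-correlation between the probes replaces the simple sum with a joint Fisher matrix built from the full data covariance, which is the 10-15% improvement quantified above.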
Aims. We investigate the contribution of shot noise and sample variance to uncertainties in the cosmological parameter constraints inferred from cluster number counts, in the context of the Euclid survey.
Methods. By analysing 1000 Euclid-like light cones, produced with the PINOCCHIO approximate method, we validated the analytical model of Hu & Kravtsov (2003, ApJ, 584, 702) for the covariance matrix, which takes into account both sources of statistical error. We then used this covariance to define the likelihood function best equipped to extract cosmological information from cluster number counts at the level of precision that will be reached by the future Euclid photometric catalogs of galaxy clusters. We also studied the impact of the cosmology dependence of the covariance matrix on the parameter constraints.
Results. The analytical covariance matrix reproduces the variance measured from the simulations to within 10 percent; this difference has no sizeable effect on the errors of the cosmological parameter constraints at this level of statistics. We also find that the Gaussian likelihood with full covariance is the only model that provides unbiased inference of cosmological parameters without underestimating the errors, and that the cosmology dependence of the covariance must be taken into account.
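The Gaussian likelihood with full covariance referred to above has the standard form ln L = −(1/2)(N − N̄)ᵀ C⁻¹ (N − N̄) − (1/2) ln det(2πC). A toy two-bin sketch; the split of C into a diagonal shot-noise term plus a sample-variance term follows the Hu & Kravtsov decomposition only in spirit, and all numbers are illustrative:

```python
import math

def gauss_loglike(counts, mean, cov):
    """Gaussian log-likelihood for number counts in two bins:
    ln L = -0.5 (N - Nbar)^T C^{-1} (N - Nbar) - 0.5 ln det(2 pi C).
    The 2x2 inverse is written out explicitly."""
    d0, d1 = counts[0] - mean[0], counts[1] - mean[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    chi2 = (cov[1][1] * d0 * d0 - 2.0 * cov[0][1] * d0 * d1
            + cov[0][0] * d1 * d1) / det
    return -0.5 * chi2 - 0.5 * math.log((2.0 * math.pi) ** 2 * det)

# Covariance = Poisson shot noise on the diagonal plus a sample-variance
# contribution correlating the two bins (illustrative numbers):
nbar = [100.0, 80.0]
cov = [[nbar[0] + 25.0, 15.0], [15.0, nbar[1] + 20.0]]
```

Dropping the off-diagonal terms (i.e., ignoring sample variance) would overstate the information content, which is the bias the full-covariance likelihood avoids.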
We present the first analysis of the Early Release Observations (ERO) program that targets fields around two lensing clusters, Abell 2390 and Abell 2764. We use imaging data from the Visible instrument (VIS) and the Near-Infrared Spectrometer and Photometer (NISP) to produce photometric catalogs for a total of 500 000 objects. The imaging data reach a typical 5σ depth in the range 25.1-25.4 AB in the NISP bands and 27.1-27.3 AB in the VIS band. Using the Lyman-break method in combination with photometric redshifts, we searched for high-redshift galaxies. We identified 30 Lyman-break galaxy (LBG) candidates at z > 6 and 139 extremely red sources (ERSs), most of which likely lie at lower redshift. The VIS imaging is deeper than the NISP imaging, which means that we can routinely identify high-redshift Lyman-break galaxies through a spectral break of about 3 magnitudes, which reduces contamination by brown dwarf stars and low-redshift galaxies. The difficulty of spatially resolving most of these sources in the imaging means that it is difficult to distinguish between galaxies and quasars. Spectroscopic follow-up campaigns of these bright sources will help us to constrain the bright end of the ultraviolet galaxy luminosity function and the quasar luminosity function at z > 6, and will constrain the physical nature of these objects. Additionally, we performed a combined strong- and weak-lensing analysis of A2390, and we show that Euclid will contribute to constraining the virial mass of galaxy clusters better. We also identify optical and near-infrared counterparts of known z > 0.6 clusters in these data; these counterparts exhibit strong-lensing features. This establishes that Euclid can characterize high-redshift clusters. Finally, we provide a glimpse of the ability of Euclid to map the intracluster light out to larger radii than current facilities, which enables us to understand the cluster assembly history better and to map the dark matter distribution. This initial dataset illustrates the diverse spectrum of legacy science that is possible with the Euclid survey.
Verifying the fully kinematic nature of the long-known cosmic microwave background (CMB) dipole is of fundamental importance in cosmology. In the standard cosmological model with the Friedmann-Lemaître-Robertson-Walker (FLRW) metric from the inflationary expansion, the CMB dipole should be entirely kinematic. Any non-kinematic CMB dipole component would thus reflect the preinflationary structure of space-time, probing the extent of the FLRW applicability. Cosmic backgrounds from galaxies after the matter-radiation decoupling should have a kinematic dipole component identical in velocity to the CMB kinematic dipole. Comparing the two can lead to isolating the CMB non-kinematic dipole. It was recently proposed that such a measurement can be done using the near-infrared cosmic infrared background (CIB) measured with the currently operating Euclid telescope, and later with Roman. The proposed method reconstructs the resolved CIB, the integrated galaxy light (IGL), from the Euclid Wide Survey and probes its dipole with a kinematic component amplified over that of the CMB by the Compton-Getting effect. The amplification, coupled with the extensive galaxy samples forming the IGL, would determine the CIB dipole with an overwhelming signal-to-noise ratio, isolating its direction to sub-degree accuracy. We developed details of the method for the Euclid Wide Survey in four bands spanning 0.6 to 2 μm. We isolated the systematic and other uncertainties and present methodologies to minimize them, after confining the sample to the magnitude range with a negligible IGL-CIB dipole from galaxy clustering. These include the required star-galaxy separation, accounting for the extinction-correction dipole using the new method developed here achieving total separation, and accounting for the Earth's orbital motion and other systematic effects. Finally, we applied the developed methodology to the simulated galaxy catalogs, successfully testing the upcoming applications. With the techniques presented, one would indeed measure the IGL-CIB dipole from the Euclid Wide Survey with high precision, probing the non-kinematic CMB dipole.
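For source counts, the kinematic dipole amplitude is conventionally written following Ellis & Baldwin (1984) as A = [2 + x(1 + α)]β. Whether the paper adopts exactly this form for the IGL is an assumption here, but it illustrates why a galaxy-based dipole is amplified relative to the CMB's bare β = v/c:

```python
def dipole_amplitude(beta, x, alpha):
    """Kinematic dipole of integral source counts N(>S) ~ S^{-x} for
    sources with spectra S_nu ~ nu^{-alpha} (Ellis & Baldwin 1984):
    A = [2 + x * (1 + alpha)] * beta, with beta = v/c."""
    return (2.0 + x * (1.0 + alpha)) * beta

beta_sun = 370.0e3 / 299792458.0  # ~370 km/s inferred from the CMB dipole
# For x = 1 and alpha = 0.75 the amplification factor over beta is 3.75.
```

The CMB temperature dipole amplitude is just β itself, so any count slope x and spectral index α of order unity give the galaxy-background dipole a factor of a few of extra signal.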
Multi-object spectroscopic galaxy surveys typically make use of photometric and colour criteria to select their targets. That is not the case for Euclid, which will use the NISP slitless spectrograph to record spectra for every source over its field of view. Slitless spectroscopy has the advantage of avoiding defining a priori a specific galaxy sample, but at the price of making the selection function harder to quantify. The Euclid Wide Survey was designed to build robust statistical samples of emission-line galaxies with fluxes brighter than 2 × 10⁻¹⁶ erg s⁻¹ cm⁻², using the Hα + [N II] complex to measure redshifts within the range 0.9 < z < 1.8. Given the expected signal-to-noise ratio of NISP spectra, at such faint fluxes a significant contamination by incorrectly measured redshifts is expected, either due to misidentification of other emission lines or to noise fluctuations mistaken as such, with the consequence of reducing the purity of the final samples. This can be significantly ameliorated by exploiting the extensive photometric information to identify emission-line galaxies over the redshift range of interest. Beyond classical multi-band selections in colour space, machine learning techniques provide novel tools to perform this task. Here, we compare and quantify the performance of six such classification algorithms in achieving this goal. We consider the case when only the photometric and morphological measurements are used, and when these are supplemented by the extensive set of ancillary ground-based photometric data, which are part of the overall Euclid scientific strategy to perform lensing tomography. The classifiers are trained and tested on two mock galaxy samples, the EL-COSMOS and Euclid Flagship2 catalogues. The best performance is obtained from either a dense neural network or a support vector classifier, with comparable results in terms of the adopted metrics. When training on on-board photometry alone, these are able to remove 87% of the sources that are fainter than the nominal flux limit or lie outside the 0.9 < z < 1.8 redshift range, a figure that increases to 97% when ground-based photometry is included. These results show how, by using the photometric information available to Euclid, it will be possible to efficiently identify and discard spurious interlopers, allowing us to build robust spectroscopic samples for cosmological investigations.