Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock–Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
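The geometric observable constrained here is the dimensionless product D_M(z) H(z) / c, which the Alcock–Paczynski test extracts from average void shapes. As a hedged illustration (assuming a flat ΛCDM background with illustrative parameters, not the paper's fiducial cosmology), it can be evaluated with a simple numerical integral:

```python
import math

def E(z, omega_m=0.32):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(omega_m * (1 + z) ** 3 + (1 - omega_m))

def dm_h_over_c(z, omega_m=0.32, steps=1000):
    """Dimensionless AP observable D_M(z) * H(z) / c.

    D_M = c * int_0^z dz' / H(z'), so the H0 factors cancel and
    D_M H / c = E(z) * int_0^z dz'/E(z')  (trapezoidal rule here).
    """
    h = z / steps
    grid = [i * h for i in range(steps + 1)]
    integral = sum(
        0.5 * h * (1 / E(a, omega_m) + 1 / E(b, omega_m))
        for a, b in zip(grid[:-1], grid[1:])
    )
    return E(z, omega_m) * integral
```

Because H0 cancels, D_M H / c isolates the shape of the expansion history, which is why the void-based AP test can constrain it independently of the absolute distance scale.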
Context.
The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GC_ph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid.
Aims.
In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods.
We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, which was obtained from the Flagship simulation, as well as additional photometric-redshift uncertainties, and we elucidate the impact of including the XC terms on constraining the latter.
Results.
Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IA, we show that the XC terms can help in distinguishing between different models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions.
We find that the XC between GC_ph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions, but, at the same time, it requires more precise knowledge of this mean with respect to single probes in order not to degrade the final “figure of merit”.
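The Fisher-matrix forecasts summarized above rest on a simple linear-algebra step: marginalized 1σ uncertainties are the square roots of the diagonal of the inverse Fisher matrix, and extra information (such as the XC terms) is added to the matrix before inversion. A minimal sketch with invented numbers, purely illustrative and not the paper's actual matrices:

```python
import numpy as np

def marginalized_sigmas(fisher):
    """1-sigma marginalized uncertainties: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy 2x2 Fisher matrices for two parameters (say, a bias parameter and
# a cosmological parameter); all numbers are invented for illustration.
F_auto = np.array([[40.0, 10.0],
                   [10.0, 12.0]])   # GC_ph and WL auto-correlations only
F_xc = np.array([[15.0, -5.0],
                 [-5.0, 6.0]])      # extra information from the XC terms

sigma_without = marginalized_sigmas(F_auto)
sigma_with = marginalized_sigmas(F_auto + F_xc)
# Fisher information is additive, so adding a positive semi-definite XC
# block can only shrink (or leave unchanged) each marginalized uncertainty.
```

This additivity is why including XC tightens constraints on both cosmological and nuisance parameters in the forecasts above.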
Context.
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
Aims.
We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods.
We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine-learning reconstruction using genetic algorithms.
Results.
We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three.
Conclusions.
Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
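The DDR itself states that d_L(z) = (1+z)² d_A(z); deviations are commonly parametrized through η(z) = d_L / [(1+z)² d_A], with η ≡ 1 in the standard case. A hedged sketch of one common phenomenological choice, η(z) = (1+z)^ε0 (an illustrative parametrization, not necessarily the exact one used in the paper):

```python
def eta(z, eps0):
    """DDR violation factor eta(z) = (1+z)**eps0; eps0 = 0 means no violation."""
    return (1.0 + z) ** eps0

def luminosity_from_angular(d_a, z, eps0=0.0):
    """Luminosity distance implied by an angular diameter distance,
    allowing for a DDR violation of strength eps0."""
    return (1.0 + z) ** 2 * d_a * eta(z, eps0)
```

A joint fit of distances from standard candles (d_L) and standard rulers or lensing (d_A) then constrains ε0; the forecast improvements quoted above correspond to shrinking the allowed band around ε0 = 0.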
Euclid preparation. Barnett, R.; Warren, S. J.; Mortlock, D. J.; ...
Astronomy and Astrophysics (Berlin), 11/2019, Volume 631.
Journal article, peer reviewed, open access.
We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^{k(z − 6)}, k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to J_AB ∼ 23. Quasars at z > 8 may be selected from Euclid OYJH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼25 quasars beyond the current record of z = 7.5, including ∼8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M_1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at J_AB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
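The assumed redshift evolution is a pure exponential decline in comoving quasar space density, Φ ∝ 10^{k(z − 6)}. A short sketch of what the two quoted slopes imply for the density relative to z = 6 (illustrative only):

```python
def qlf_decline(z, k):
    """Quasar space density relative to z = 6: Phi(z)/Phi(6) = 10**(k*(z-6))."""
    return 10.0 ** (k * (z - 6.0))

# At z = 7.5 the two slopes quoted in the abstract give:
shallow = qlf_decline(7.5, k=-0.72)  # roughly 8% of the z = 6 density
steep = qlf_decline(7.5, k=-0.92)    # roughly 4% of the z = 6 density
```

The factor-of-two difference between the two slopes at z = 7.5 is what drives the spread in the predicted quasar yields.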
Context.
In the last decade, astronomers have found a new type of supernova, named superluminous supernovae (SLSNe) for their high peak luminosities and long light curves. These hydrogen-free explosions (SLSNe-I) can be seen to z ∼ 4 and therefore offer the possibility of probing the distant Universe.
Aims.
We aim to investigate the possibility of detecting SLSNe-I using ESA’s Euclid satellite, scheduled for launch in 2020. In particular, we study the Euclid Deep Survey (EDS), which will provide a unique combination of area, depth, and cadence over the mission.
Methods.
We estimated the redshift distribution of Euclid SLSNe-I using the latest information on their rates and spectral energy distributions, as well as known Euclid instrument and survey parameters, including the cadence and depth of the EDS. To estimate the uncertainties, we calculated their distribution with two different set-ups, namely optimistic and pessimistic, adopting different star-formation densities and rates. We also applied a standardization method to the peak magnitudes to create a simulated Hubble diagram and explore possible cosmological constraints.
Results.
We show that Euclid should detect approximately 140 high-quality SLSNe-I to z ∼ 3.5 over the first five years of the mission (with an additional 70 if we lower our photometric classification criteria). This sample could revolutionize the study of SLSNe-I at z > 1 and open up their use as probes of star-formation rates, galaxy populations, and the interstellar and intergalactic medium. In addition, a sample of such SLSNe-I could improve constraints on a time-dependent dark energy equation of state, namely w(a), when combined with local SLSNe-I and the expected SN Ia sample from the Dark Energy Survey.
Conclusions.
We show that Euclid will observe hundreds of SLSNe-I for free. These luminous transients will be in the Euclid data stream, and we should prepare now to identify them, as they offer a new probe of the high-redshift Universe for both astrophysics and cosmology.
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different non-linear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly non-linear regime, with the Hubble constant, H_0, and the clustering amplitude, σ_8, affected the most. Improvements in the modelling of non-linear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the non-linear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for non-linear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
In physically realistic, scalar-field-based dynamical dark energy models (including, e.g., quintessence), one naturally expects the scalar field to couple to the rest of the model’s degrees of freedom. In particular, a coupling to the electromagnetic sector leads to a time (redshift) dependence in the fine-structure constant and a violation of the weak equivalence principle. Here we extend the previous Euclid forecast constraints on dark energy models to this enlarged (but physically more realistic) parameter space, and we forecast how well Euclid, together with high-resolution spectroscopic data and local experiments, can constrain these models. Our analysis combines simulated Euclid data products with astrophysical measurements of the fine-structure constant, α, and local experimental constraints, and it includes both parametric and non-parametric methods. For the astrophysical measurements of α, we consider both the currently available data and a simulated dataset representative of Extremely Large Telescope measurements that are expected to be available in the 2030s. Our parametric analysis shows that in the latter case, the inclusion of astrophysical and local data improves the Euclid dark energy figure of merit by between 8% and 26%, depending on the fiducial model, with the improvements being larger in the null case where the fiducial coupling to the electromagnetic sector vanishes. These improvements would be smaller with the current astrophysical data. Moreover, we illustrate how a genetic-algorithm-based reconstruction provides a null test for the presence of the coupling. Our results highlight the importance of complementing surveys like Euclid with external data products in order to accurately test the wider parameter spaces of physically motivated paradigms.
The Euclid mission – with its spectroscopic galaxy survey covering a sky area over 15 000 deg² in the redshift range 0.9 < z < 1.8 – will provide a sample of tens of thousands of cosmic voids. This paper thoroughly explores for the first time the constraining power of the void size function on the properties of dark energy (DE) from a survey mock catalogue, the official Euclid Flagship simulation. We identified voids in the Flagship light-cone, which closely matches the features of the upcoming Euclid spectroscopic data set. We modelled the void size function considering a state-of-the-art methodology: we relied on the volume-conserving (Vdn) model, a modification of the popular Sheth & van de Weygaert model for void number counts, extended by means of a linear function of the large-scale galaxy bias. We found an excellent agreement between model predictions and measured mock void number counts. We computed updated forecasts for the Euclid mission on DE from the void size function and provided reliable void number estimates to serve as a basis for further forecasts of cosmological applications using voids. We analysed two different cosmological models for DE: the first described by a constant DE equation-of-state parameter, w, and the second by a dynamic equation of state with coefficients w_0 and w_a. We forecast 1σ errors on w smaller than 10%, and we estimated an expected figure of merit for the dynamical DE scenario of FoM(w_0, w_a) = 17 when considering only the neutrino mass as an additional free parameter of the model. The analysis is based on conservative assumptions to ensure full robustness, and it is a pathfinder for future enhancements of the technique. Our results showcase the impressive constraining power of the void size function from the Euclid spectroscopic sample, both as a stand-alone probe and in combination with other Euclid cosmological probes.
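The void size function modelling referenced here builds on the excursion-set first-crossing distribution of Sheth & van de Weygaert. As a hedged sketch, its leading-order approximation (valid when σ ≪ |δ_v|) is f(σ) ≈ √(2/π) (|δ_v|/σ) exp(−δ_v²/2σ²); the void-formation threshold δ_v below is an illustrative value, not the paper's calibrated one:

```python
import math

DELTA_V = -2.7  # illustrative linear void-formation threshold (assumption)

def f_void(sigma, delta_v=DELTA_V):
    """One-term small-sigma approximation to the Sheth & van de Weygaert
    first-crossing distribution for voids (valid for sigma << |delta_v|)."""
    nu = abs(delta_v) / sigma
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

# Larger voids correspond to smaller sigma, so their abundance is
# exponentially suppressed; this steep sensitivity of the counts to the
# amplitude of fluctuations is what makes the void size function a
# competitive dark energy probe.
```

The Vdn model mentioned in the abstract then rescales this abundance so that the total volume in voids is conserved between the linear and non-linear regimes.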
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in the convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance, non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods that are included in the official Euclid data-processing pipeline: the standard Kaiser & Squires (KS) method and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method. In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
We present the first measurements of the weak gravitational lensing signal induced by the large-scale mass distribution in the Universe from data obtained as part of the ongoing Canada-France-Hawaii Telescope Legacy Survey (CFHTLS). The data used in this analysis are from the Wide Synoptic Survey, which aims to image ∼170 deg² in five filters. We have analyzed an effective area of ∼22 deg² (31 pointings) of i′ data spread over two of the three survey fields. These data are of excellent quality, and the results bode well for the remainder of the survey: we do not detect a significant "B" mode, suggesting that residual systematics are negligible at the current level of accuracy. Assuming a cold dark matter model and marginalizing over the Hubble parameter h ∈ [0.6, 0.8], the source redshift distribution, and systematics, we constrain σ_8, the amplitude of the matter power spectrum. At a fiducial matter density Ω_m = 0.3 we find σ_8 = 0.85 ± 0.06. This estimate is in excellent agreement with previous studies. A combination of our results with those from the Deep component of the CFHTLS enables us to place a constraint on a constant equation of state for the dark energy, based on cosmic shear data alone. We find that w_0 < −0.8 at 68% confidence.