In physically realistic, scalar-field-based dynamical dark energy models (including, e.g., quintessence), one naturally expects the scalar field to couple to the rest of the model's degrees of freedom. In particular, a coupling to the electromagnetic sector leads to a time (redshift) dependence of the fine-structure constant and to a violation of the weak equivalence principle. Here we extend previous Euclid forecast constraints on dark energy models to this enlarged (but physically more realistic) parameter space, and forecast how well Euclid, together with high-resolution spectroscopic data and local experiments, can constrain these models. Our analysis combines simulated Euclid data products with astrophysical measurements of the fine-structure constant, α, and local experimental constraints, and it includes both parametric and non-parametric methods. For the astrophysical measurements of α, we consider both the currently available data and a simulated dataset representative of the Extremely Large Telescope measurements expected to become available in the 2030s. Our parametric analysis shows that in the latter case the inclusion of astrophysical and local data improves the Euclid dark energy figure of merit by between 8% and 26%, depending on the fiducial model, with the improvements being larger in the null case where the fiducial coupling to the electromagnetic sector vanishes. These improvements would be smaller with the current astrophysical data. Moreover, we illustrate how a genetic-algorithm-based reconstruction provides a null test for the presence of the coupling. Our results highlight the importance of complementing surveys like Euclid with external data products in order to accurately test the wider parameter spaces of physically motivated paradigms.
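For concreteness, the redshift dependence of α in this class of models can be sketched numerically. The snippet below is a minimal illustration, assuming a flat background and a constant equation of state w, with `zeta` a hypothetical dimensionless coupling to the electromagnetic sector (all numerical values are illustrative, not the paper's fiducials):

```python
import numpy as np

def delta_alpha_over_alpha(z, zeta, w=-0.9, omega_m=0.3):
    """Fractional drift of the fine-structure constant at redshift z for a
    quintessence field with constant equation of state w, coupled to the
    electromagnetic sector with dimensionless coupling zeta.

    Implements the standard relation for such models:
        dalpha/alpha(z) = zeta * int_0^z sqrt(3 Omega_phi (1 + w)) dz' / (1 + z')
    """
    zp = np.linspace(0.0, z, 2001)
    e2 = omega_m * (1 + zp) ** 3 + (1 - omega_m) * (1 + zp) ** (3 * (1 + w))
    omega_phi = (1 - omega_m) * (1 + zp) ** (3 * (1 + w)) / e2  # field density fraction
    integrand = np.sqrt(3.0 * omega_phi * (1.0 + w)) / (1.0 + zp)
    # trapezoidal integration, kept explicit for portability across NumPy versions
    return zeta * 0.5 * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(zp)))
```

Setting `zeta = 0` recovers a constant α, which is the null case discussed above.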
Multi-zonal computer-generated holograms (MZ-CGHs), in combination with interferometric wavefront measurements, are perfectly suited as optical adjustment tools, especially when the demands on the alignment accuracy are very high. After reviewing the basic idea of alignment with MZ-CGHs, we derive the analytic relation between the interferometrically observed tilt and power values and the associated lens placement errors, including estimates of the applied approximations. This analysis yields the parameters that determine the principal sensitivity of the method. Subsequently, the achievable accuracy of large 6″ MZ-CGHs in practical application is tested with a series of different optical measurements, which confirm the technical feasibility. The productive use of the technique will be presented in Part II of the paper for different examples in the framework of the Euclid space telescope.
Context.
Stage IV weak lensing experiments will offer more than an order-of-magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed.
Aims.
In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities in high-magnification regions.
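The reduced shear approximation can be made concrete with a toy numerical check; the shear and convergence values below are illustrative, not Euclid numbers:

```python
# Reduced shear g = gamma / (1 - kappa); to first order, g ≈ gamma (1 + kappa).
gamma = 0.02 + 0.01j   # complex shear (toy value)
kappa = 0.05           # convergence (toy value)

g_exact = gamma / (1.0 - kappa)
g_first_order = gamma * (1.0 + kappa)

# Error made by equating the statistics of g with those of gamma:
frac_err_shear_only = abs(g_exact - gamma) / abs(gamma)             # = kappa/(1-kappa)
# Residual error of the first-order correction itself:
frac_err_first_order = abs(g_exact - g_first_order) / abs(g_exact)  # = kappa**2
```

The shear-only error is first order in κ, while the corrected expression leaves only an O(κ²) residual, which is why the correction matters at Stage IV precision.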
Methods.
The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism, and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward-modelling approach.
Results.
These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
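Biases of this kind are obtained from the standard Fisher-bias formalism, δθ = F⁻¹B, where B accumulates the neglected correction against the parameter derivatives of the spectra. A toy two-parameter NumPy sketch with entirely synthetic spectra and derivatives (not the actual Euclid computation):

```python
import numpy as np

ells = np.arange(10, 1500, dtype=float)
var = 2.0 / (2.0 * ells + 1.0)                   # toy Gaussian variance per multipole
dC = np.stack([1e-2 / ells, 1e-1 / ells**1.5])   # toy dC_ell/dtheta for two parameters
delta_C = 5e-4 / ells                            # toy neglected (e.g. reduced shear) term

F = (dC / var) @ dC.T          # Fisher matrix, F_ij = sum_ell dC_i dC_j / var
B = (dC / var) @ delta_C       # bias vector,   B_j  = sum_ell delta_C dC_j / var
bias = np.linalg.solve(F, B)   # parameter shifts delta_theta
sigma = np.sqrt(np.diag(np.linalg.inv(F)))
bias_in_sigma = bias / sigma   # biases in units of the forecast 1-sigma errors
```

Dividing by the forecast errors is what yields the σ-valued shifts of the form quoted above.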
Context.
The ESA Euclid space telescope could observe up to 150 000 asteroids as a side product of its primary cosmological mission. Asteroids appear as trailed sources, that is, streaks, in the images. Owing to the survey area of 15 000 square degrees and the number of sources, automated methods have to be used to find them. Euclid is equipped with a visible camera, VIS (VISual imager), and a near-infrared camera, NISP (Near-Infrared Spectrometer and Photometer), with three filters.
Aims.
We aim to develop a pipeline to detect fast-moving objects in Euclid images, with both high completeness and high purity.
Methods.
We tested the StreakDet software for finding asteroids in simulated Euclid images. We optimized the parameters of StreakDet to maximize completeness, and developed a post-processing algorithm to improve the purity of the sample of detected sources by removing false-positive detections.
Results.
StreakDet finds 96.9% of the synthetic asteroid streaks with apparent magnitudes brighter than 23rd magnitude and streak lengths longer than 15 pixels (10 arcsec h^−1), but this comes at the cost of a high number of false positives. The number of false positives can be radically reduced with multi-streak analysis, which utilizes all four dithers obtained by Euclid.
Conclusions.
StreakDet is a good tool for identifying asteroids in Euclid images, but there is still room for improvement, in particular for finding short (less than 13 pixels, corresponding to 8 arcsec h^−1) and/or faint streaks (fainter than apparent magnitude 23).
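The conversion between streak length in pixels and apparent sky motion in arcsec h⁻¹ follows from the pixel scale and the exposure time. A sketch assuming a nominal VIS pixel scale of 0.1 arcsec per pixel and a roughly 565 s exposure (both values assumed here for illustration):

```python
PIXEL_SCALE = 0.1   # arcsec per VIS pixel (assumed nominal value)
EXPOSURE_S = 565.0  # single-exposure duration in seconds (assumed)

def sky_motion_arcsec_per_hour(streak_length_px: float) -> float:
    """Apparent sky motion implied by a streak of the given length."""
    return streak_length_px * PIXEL_SCALE / (EXPOSURE_S / 3600.0)
```

With these numbers, a 15-pixel streak corresponds to roughly 10 arcsec h⁻¹, consistent with the figures quoted above.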
ABSTRACT
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of 1 (h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of 1 per cent or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
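Schematically, an emulator of this kind returns a multiplicative nonlinear boost B(k, z) that is applied to a linear power spectrum, P_nl = B · P_lin. The sketch below uses a stand-in boost and a toy linear spectrum purely to illustrate the pipeline step; it is not the EuclidEmulator2 interface or a fit to simulations:

```python
import numpy as np

def toy_boost(k, z):
    # Placeholder boost: unity on large scales, growing on small scales,
    # weaker at higher redshift. NOT calibrated against any simulation.
    return 1.0 + k ** 1.5 / (1.0 + z)

k = np.logspace(-2, 1, 50)                         # h/Mpc, the emulated k-range
p_lin = 2e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.4   # toy linear spectrum shape
p_nl = toy_boost(k, z=0.5) * p_lin                 # nonlinear spectrum via the boost
```

Emulating the dimensionless boost rather than the spectrum itself is what keeps the percent-level accuracy requirement tractable, since the linear part can be computed exactly by a Boltzmann code.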
Context.
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
Aims.
We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods.
We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine-learning reconstruction using genetic algorithms.
Results.
We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three.
Conclusions.
Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
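The DDR, η(z) = d_L / [(1+z)² d_A], together with a common parametrization of its violations, η(z) = (1+z)^ε₀, can be sketched numerically; the flat ΛCDM parameters below are illustrative:

```python
import numpy as np

def comoving_distance(z, omega_m=0.3, h=0.7, n=4001):
    """Comoving distance in Mpc for flat LCDM (toy parameters)."""
    zp = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
    integral = 0.5 * float(np.sum((inv_e[1:] + inv_e[:-1]) * np.diff(zp)))
    return 2997.92458 / h * integral   # (c/H0 in Mpc) times the integral

z = 1.0
d_c = comoving_distance(z)
d_a = d_c / (1 + z)               # angular diameter distance
d_l = (1 + z) * d_c               # luminosity distance (photon number conserved)
eta = d_l / ((1 + z) ** 2 * d_a)  # distance duality ratio; exactly 1 when DDR holds
eps0 = np.log(eta) / np.log(1 + z)  # parametrized violation exponent
```

Here η = 1 by construction, since both distances are built from the same comoving distance; constraining ε₀ amounts to testing whether independently measured d_L and d_A data remain consistent with that identity.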
Euclid preparation. Desprez, G.; Coupon, J.; Almosallam, I.; et al. Astronomy and Astrophysics, Vol. 644, 12/2020. Journal article, peer-reviewed, open access.
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes, in terms of both single-value redshift estimates and redshift probability distributions (PDZs), on a common set of data, focusing particularly on the 0.2−2.6 redshift range that the Euclid mission will probe. We designed a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data were divided into two samples: a calibration sample, for which photometry and redshifts were provided to the participants, and a validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag indicating the sources they consider unfit for use in cosmological analyses. The performance of each method was assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by participants are efficient in removing strong outliers, that is to say, sources for which the photo-z deviates by more than 0.15(1 + z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. We find that no machine-learning method provides good results in the regions of galaxy color-space that are sparsely populated by spectroscopic redshifts, for example z > 1. However, they generally perform better than template-fitting methods at low redshift (z < 0.7), indicating that template-fitting methods do not use all of the information contained in the photometry. We introduce metrics that quantify both photo-z precision and completeness of the samples (post-rejection), since both contribute to the final figure of merit of the science goals of the survey (e.g., cosmic shear from Euclid). Template-fitting methods provide the best results in these metrics, but we show that a combination of template-fitting results and machine-learning results with rejection criteria can outperform any individual method. On this basis, we argue that further work in identifying how best to select between machine-learning and template-fitting approaches for each individual galaxy should be pursued as a priority.
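The outlier definition used above, together with the standard NMAD scatter commonly reported alongside it, can be computed as follows (synthetic redshifts for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
z_spec = rng.uniform(0.2, 2.6, 10_000)                               # mock spec-z
z_phot = z_spec + 0.03 * (1 + z_spec) * rng.standard_normal(10_000)  # mock photo-z

dz = (z_phot - z_spec) / (1 + z_spec)
outlier_fraction = float(np.mean(np.abs(dz) > 0.15))          # strong outliers
nmad = 1.4826 * float(np.median(np.abs(dz - np.median(dz))))  # robust scatter
```

Normalizing by (1 + z) keeps the metric comparable across the full redshift range, and the median-based NMAD estimate is insensitive to the very outliers being counted.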
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock–Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
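The geometric observable constrained by the void-based Alcock–Paczynski test is the dimensionless product D_M(z) H(z)/c; a flat ΛCDM toy evaluation (illustrative parameters):

```python
import numpy as np

def dm_h_over_c(z, omega_m=0.3, n=4001):
    """D_M(z) * H(z) / c in flat LCDM (illustrative parameters)."""
    zp = np.linspace(0.0, z, n)
    e = np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
    # D_M in units of the Hubble distance c/H0, by trapezoidal integration:
    dm_over_dh = 0.5 * float(np.sum((1 / e[1:] + 1 / e[:-1]) * np.diff(zp)))
    return dm_over_dh * float(e[-1])   # multiply by H(z)/H0 -> D_M H / c
```

Because H0 cancels in the product, this combination probes the expansion history's shape directly, which is why voids can constrain it to sub-percent precision independently of the distance ladder.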
We present a tomographic weak lensing analysis of the Kilo Degree Survey Data Release 4 (KiDS-1000), using a new pseudo angular power spectrum estimator (pseudo-Cℓ) under development for the ESA Euclid mission. Over 21 million galaxies with shape information are divided into five tomographic redshift bins, ranging from 0.1 to 1.2 in photometric redshift. We measured pseudo-Cℓ using eight bands in the multipole range 76 < ℓ < 1500 for auto- and cross-power spectra between the tomographic bins. A series of tests were carried out to check for systematic contamination from a variety of observational sources, including stellar number density, variations in survey depth, and point spread function properties. While some marginal correlations with these systematic tracers were observed, there is no evidence of bias in the cosmological inference. B-mode power spectra are consistent with zero signal, with no significant residual contamination from E/B-mode leakage. We performed a Bayesian analysis of the pseudo-Cℓ estimates by forward modelling the effects of the mask. Assuming a spatially flat ΛCDM cosmology, we constrained the structure growth parameter S8 = σ8(Ωm/0.3)^{1/2} = 0.754^{+0.027}_{−0.029}. When combining cosmic shear from KiDS-1000 with baryon acoustic oscillation and redshift space distortion data from recent Sloan Digital Sky Survey (SDSS) measurements of luminous red galaxies, as well as the Lyman-α forest and its cross-correlation with quasars, we tightened these constraints to S8 = 0.771^{+0.006}_{−0.032}. These results are in very good agreement with previous KiDS-1000 and SDSS analyses and confirm a ∼3σ tension with early-Universe constraints from cosmic microwave background experiments.
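The S8 combination is a direct function of σ8 and Ωm, chosen because cosmic shear constrains it nearly independently of the two parameters individually; a trivial check with round illustrative numbers (not the KiDS-1000 posterior):

```python
import math

def s8(sigma_8: float, omega_m: float) -> float:
    """Structure growth parameter S_8 = sigma_8 * (Omega_m / 0.3)^(1/2)."""
    return sigma_8 * math.sqrt(omega_m / 0.3)
```

The 0.3 pivot means S8 equals σ8 exactly when Ωm = 0.3, and models trading higher σ8 for lower Ωm along the shear degeneracy leave S8 nearly unchanged.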